Test Report: KVM_Linux 22353

dccbb7bb926f2ef30a57d8898bfc971889daa155:2025-12-29:43039

Failed tests (22/370)

TestFunctional/serial/SoftStart (484.11s)

=== RUN   TestFunctional/serial/SoftStart
I1229 06:53:22.195105   13486 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695625 --alsologtostderr -v=8
E1229 06:53:43.099814   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:43.105120   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:43.115489   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:43.135883   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:43.176239   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:43.256606   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:43.417087   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:43.737769   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:44.378852   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:45.659454   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:48.219619   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:53.340688   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:54:03.581743   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:54:24.062620   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:55:05.023876   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:26.944682   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:58:43.093536   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:10.791976   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-695625 --alsologtostderr -v=8: exit status 81 (6m30.107929216s)

-- stdout --
	* [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	
	

-- /stdout --
** stderr ** 
	I1229 06:53:22.250786   17440 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:53:22.251073   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:53:22.251082   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:53:22.251087   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:53:22.251322   17440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 06:53:22.251807   17440 out.go:368] Setting JSON to false
	I1229 06:53:22.252599   17440 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2152,"bootTime":1766989050,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:53:22.252669   17440 start.go:143] virtualization: kvm guest
	I1229 06:53:22.254996   17440 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:53:22.256543   17440 notify.go:221] Checking for updates...
	I1229 06:53:22.256551   17440 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:53:22.258115   17440 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:53:22.259464   17440 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:53:22.260823   17440 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 06:53:22.262461   17440 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 06:53:22.263830   17440 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:53:22.265499   17440 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:53:22.265604   17440 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:53:22.301877   17440 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 06:53:22.303062   17440 start.go:309] selected driver: kvm2
	I1229 06:53:22.303099   17440 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:22.303255   17440 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:53:22.304469   17440 cni.go:84] Creating CNI manager for ""
	I1229 06:53:22.304541   17440 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:53:22.304607   17440 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:22.304716   17440 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 06:53:22.306617   17440 out.go:179] * Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	I1229 06:53:22.307989   17440 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 06:53:22.308028   17440 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 06:53:22.308037   17440 cache.go:65] Caching tarball of preloaded images
	I1229 06:53:22.308172   17440 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 06:53:22.308185   17440 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 06:53:22.308288   17440 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/config.json ...
	I1229 06:53:22.308499   17440 start.go:360] acquireMachinesLock for functional-695625: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 06:53:22.308543   17440 start.go:364] duration metric: took 25.28µs to acquireMachinesLock for "functional-695625"
	I1229 06:53:22.308555   17440 start.go:96] Skipping create...Using existing machine configuration
	I1229 06:53:22.308560   17440 fix.go:54] fixHost starting: 
	I1229 06:53:22.310738   17440 fix.go:112] recreateIfNeeded on functional-695625: state=Running err=<nil>
	W1229 06:53:22.310765   17440 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 06:53:22.313927   17440 out.go:252] * Updating the running kvm2 "functional-695625" VM ...
	I1229 06:53:22.313960   17440 machine.go:94] provisionDockerMachine start ...
	I1229 06:53:22.317184   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.317690   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.317748   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.317941   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.318146   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.318156   17440 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 06:53:22.424049   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 06:53:22.424102   17440 buildroot.go:166] provisioning hostname "functional-695625"
	I1229 06:53:22.427148   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.427685   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.427715   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.427957   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.428261   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.428280   17440 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-695625 && echo "functional-695625" | sudo tee /etc/hostname
	I1229 06:53:22.552563   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 06:53:22.555422   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.555807   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.555834   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.556061   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.556278   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.556302   17440 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-695625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-695625/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-695625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 06:53:22.661438   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 06:53:22.661470   17440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 06:53:22.661505   17440 buildroot.go:174] setting up certificates
	I1229 06:53:22.661529   17440 provision.go:84] configureAuth start
	I1229 06:53:22.664985   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.665439   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.665459   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.667758   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.668124   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.668145   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.668257   17440 provision.go:143] copyHostCerts
	I1229 06:53:22.668280   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 06:53:22.668308   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 06:53:22.668317   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 06:53:22.668383   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 06:53:22.668476   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 06:53:22.668505   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 06:53:22.668512   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 06:53:22.668541   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 06:53:22.668582   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 06:53:22.668598   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 06:53:22.668603   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 06:53:22.668632   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 06:53:22.668676   17440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.functional-695625 san=[127.0.0.1 192.168.39.121 functional-695625 localhost minikube]
	I1229 06:53:22.746489   17440 provision.go:177] copyRemoteCerts
	I1229 06:53:22.746545   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 06:53:22.749128   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.749596   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.749616   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.749757   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:22.836885   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 06:53:22.836959   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 06:53:22.872390   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 06:53:22.872481   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 06:53:22.908829   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 06:53:22.908896   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 06:53:22.941014   17440 provision.go:87] duration metric: took 279.457536ms to configureAuth
	I1229 06:53:22.941053   17440 buildroot.go:189] setting minikube options for container-runtime
	I1229 06:53:22.941277   17440 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:53:22.944375   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.944857   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.944916   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.945128   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.945387   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.945402   17440 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 06:53:23.052106   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 06:53:23.052136   17440 buildroot.go:70] root file system type: tmpfs
	I1229 06:53:23.052304   17440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 06:53:23.055887   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.056416   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.056446   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.056629   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.056893   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.056961   17440 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 06:53:23.183096   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 06:53:23.186465   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.186943   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.187006   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.187227   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.187475   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.187494   17440 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 06:53:23.306011   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 06:53:23.306077   17440 machine.go:97] duration metric: took 992.109676ms to provisionDockerMachine
	I1229 06:53:23.306099   17440 start.go:293] postStartSetup for "functional-695625" (driver="kvm2")
	I1229 06:53:23.306114   17440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 06:53:23.306201   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 06:53:23.309537   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.309944   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.309967   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.310122   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.393657   17440 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 06:53:23.398689   17440 command_runner.go:130] > NAME=Buildroot
	I1229 06:53:23.398723   17440 command_runner.go:130] > VERSION=2025.02
	I1229 06:53:23.398731   17440 command_runner.go:130] > ID=buildroot
	I1229 06:53:23.398737   17440 command_runner.go:130] > VERSION_ID=2025.02
	I1229 06:53:23.398745   17440 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1229 06:53:23.398791   17440 info.go:137] Remote host: Buildroot 2025.02
	I1229 06:53:23.398821   17440 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 06:53:23.398897   17440 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 06:53:23.398981   17440 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 06:53:23.398993   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /etc/ssl/certs/134862.pem
	I1229 06:53:23.399068   17440 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> hosts in /etc/test/nested/copy/13486
	I1229 06:53:23.399075   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> /etc/test/nested/copy/13486/hosts
	I1229 06:53:23.399114   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13486
	I1229 06:53:23.412045   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 06:53:23.445238   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts --> /etc/test/nested/copy/13486/hosts (40 bytes)
	I1229 06:53:23.479048   17440 start.go:296] duration metric: took 172.930561ms for postStartSetup
	I1229 06:53:23.479099   17440 fix.go:56] duration metric: took 1.170538464s for fixHost
	I1229 06:53:23.482307   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.482761   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.482808   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.483049   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.483313   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.483327   17440 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 06:53:23.586553   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766991203.580410695
	
	I1229 06:53:23.586572   17440 fix.go:216] guest clock: 1766991203.580410695
	I1229 06:53:23.586579   17440 fix.go:229] Guest: 2025-12-29 06:53:23.580410695 +0000 UTC Remote: 2025-12-29 06:53:23.479103806 +0000 UTC m=+1.278853461 (delta=101.306889ms)
	I1229 06:53:23.586594   17440 fix.go:200] guest clock delta is within tolerance: 101.306889ms
	I1229 06:53:23.586598   17440 start.go:83] releasing machines lock for "functional-695625", held for 1.278049275s
	I1229 06:53:23.590004   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.590438   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.590463   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.591074   17440 ssh_runner.go:195] Run: cat /version.json
	I1229 06:53:23.591186   17440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 06:53:23.594362   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594454   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594831   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.594868   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594954   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.595021   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.595083   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.595278   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.692873   17440 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1229 06:53:23.692948   17440 command_runner.go:130] > {"iso_version": "v1.37.0-1766979747-22353", "kicbase_version": "v0.0.48-1766884053-22351", "minikube_version": "v1.37.0", "commit": "f5189b2bdbb6990e595e25e06a017f8901d29fa8"}
	I1229 06:53:23.693063   17440 ssh_runner.go:195] Run: systemctl --version
	I1229 06:53:23.700357   17440 command_runner.go:130] > systemd 256 (256.7)
	I1229 06:53:23.700393   17440 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1229 06:53:23.700501   17440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1229 06:53:23.707230   17440 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1229 06:53:23.707369   17440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 06:53:23.707433   17440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 06:53:23.719189   17440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 06:53:23.719220   17440 start.go:496] detecting cgroup driver to use...
	I1229 06:53:23.719246   17440 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 06:53:23.719351   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:53:23.744860   17440 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1229 06:53:23.744940   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 06:53:23.758548   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 06:53:23.773051   17440 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 06:53:23.773122   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 06:53:23.786753   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 06:53:23.800393   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 06:53:23.813395   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 06:53:23.826600   17440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 06:53:23.840992   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 06:53:23.854488   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 06:53:23.869084   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 06:53:23.882690   17440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 06:53:23.894430   17440 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1229 06:53:23.894542   17440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 06:53:23.912444   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:24.139583   17440 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 06:53:24.191402   17440 start.go:496] detecting cgroup driver to use...
	I1229 06:53:24.191457   17440 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 06:53:24.191521   17440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 06:53:24.217581   17440 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1229 06:53:24.217604   17440 command_runner.go:130] > [Unit]
	I1229 06:53:24.217609   17440 command_runner.go:130] > Description=Docker Application Container Engine
	I1229 06:53:24.217615   17440 command_runner.go:130] > Documentation=https://docs.docker.com
	I1229 06:53:24.217626   17440 command_runner.go:130] > After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1229 06:53:24.217631   17440 command_runner.go:130] > Wants=network-online.target containerd.service
	I1229 06:53:24.217635   17440 command_runner.go:130] > Requires=docker.socket
	I1229 06:53:24.217638   17440 command_runner.go:130] > StartLimitBurst=3
	I1229 06:53:24.217642   17440 command_runner.go:130] > StartLimitIntervalSec=60
	I1229 06:53:24.217646   17440 command_runner.go:130] > [Service]
	I1229 06:53:24.217649   17440 command_runner.go:130] > Type=notify
	I1229 06:53:24.217653   17440 command_runner.go:130] > Restart=always
	I1229 06:53:24.217660   17440 command_runner.go:130] > ExecStart=
	I1229 06:53:24.217694   17440 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1229 06:53:24.217710   17440 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1229 06:53:24.217748   17440 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1229 06:53:24.217761   17440 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1229 06:53:24.217767   17440 command_runner.go:130] > LimitNOFILE=infinity
	I1229 06:53:24.217782   17440 command_runner.go:130] > LimitNPROC=infinity
	I1229 06:53:24.217790   17440 command_runner.go:130] > LimitCORE=infinity
	I1229 06:53:24.217818   17440 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1229 06:53:24.217828   17440 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1229 06:53:24.217833   17440 command_runner.go:130] > TasksMax=infinity
	I1229 06:53:24.217840   17440 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1229 06:53:24.217847   17440 command_runner.go:130] > Delegate=yes
	I1229 06:53:24.217855   17440 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1229 06:53:24.217864   17440 command_runner.go:130] > KillMode=process
	I1229 06:53:24.217871   17440 command_runner.go:130] > OOMScoreAdjust=-500
	I1229 06:53:24.217881   17440 command_runner.go:130] > [Install]
	I1229 06:53:24.217896   17440 command_runner.go:130] > WantedBy=multi-user.target
	I1229 06:53:24.217973   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:53:24.255457   17440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 06:53:24.293449   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:53:24.313141   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 06:53:24.332090   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:53:24.359168   17440 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1229 06:53:24.359453   17440 ssh_runner.go:195] Run: which cri-dockerd
	I1229 06:53:24.364136   17440 command_runner.go:130] > /usr/bin/cri-dockerd
	I1229 06:53:24.364255   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 06:53:24.377342   17440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 06:53:24.400807   17440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 06:53:24.632265   17440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 06:53:24.860401   17440 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 06:53:24.860544   17440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 06:53:24.885002   17440 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 06:53:24.902479   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:25.138419   17440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 06:53:48.075078   17440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (22.936617903s)
	I1229 06:53:48.075181   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 06:53:48.109404   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 06:53:48.160259   17440 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 06:53:48.213352   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 06:53:48.231311   17440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 06:53:48.408709   17440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 06:53:48.584722   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:48.754219   17440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 06:53:48.798068   17440 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 06:53:48.815248   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:48.983637   17440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 06:53:49.117354   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 06:53:49.139900   17440 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 06:53:49.139985   17440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 06:53:49.146868   17440 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1229 06:53:49.146900   17440 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1229 06:53:49.146910   17440 command_runner.go:130] > Device: 0,23	Inode: 2092        Links: 1
	I1229 06:53:49.146918   17440 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1229 06:53:49.146926   17440 command_runner.go:130] > Access: 2025-12-29 06:53:49.121969518 +0000
	I1229 06:53:49.146933   17440 command_runner.go:130] > Modify: 2025-12-29 06:53:48.995956445 +0000
	I1229 06:53:49.146940   17440 command_runner.go:130] > Change: 2025-12-29 06:53:49.012958222 +0000
	I1229 06:53:49.146947   17440 command_runner.go:130] >  Birth: 2025-12-29 06:53:48.995956445 +0000
	I1229 06:53:49.146986   17440 start.go:574] Will wait 60s for crictl version
	I1229 06:53:49.147040   17440 ssh_runner.go:195] Run: which crictl
	I1229 06:53:49.152717   17440 command_runner.go:130] > /usr/bin/crictl
	I1229 06:53:49.152823   17440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 06:53:49.184154   17440 command_runner.go:130] > Version:  0.1.0
	I1229 06:53:49.184179   17440 command_runner.go:130] > RuntimeName:  docker
	I1229 06:53:49.184183   17440 command_runner.go:130] > RuntimeVersion:  28.5.2
	I1229 06:53:49.184188   17440 command_runner.go:130] > RuntimeApiVersion:  v1
	I1229 06:53:49.184211   17440 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 06:53:49.184266   17440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 06:53:49.212414   17440 command_runner.go:130] > 28.5.2
	I1229 06:53:49.213969   17440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 06:53:49.257526   17440 command_runner.go:130] > 28.5.2
	I1229 06:53:49.262261   17440 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 06:53:49.266577   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:49.267255   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:49.267298   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:49.267633   17440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 06:53:49.286547   17440 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1229 06:53:49.286686   17440 kubeadm.go:884] updating cluster {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 06:53:49.286896   17440 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 06:53:49.286965   17440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 06:53:49.324994   17440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0
	I1229 06:53:49.325029   17440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 06:53:49.325037   17440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0
	I1229 06:53:49.325045   17440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0
	I1229 06:53:49.325052   17440 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1229 06:53:49.325060   17440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1229 06:53:49.325067   17440 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1229 06:53:49.325074   17440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 06:53:49.325113   17440 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 06:53:49.325127   17440 docker.go:624] Images already preloaded, skipping extraction
	I1229 06:53:49.325191   17440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 06:53:49.352256   17440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0
	I1229 06:53:49.352294   17440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0
	I1229 06:53:49.352301   17440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0
	I1229 06:53:49.352309   17440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 06:53:49.352315   17440 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1229 06:53:49.352323   17440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1229 06:53:49.352349   17440 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1229 06:53:49.352361   17440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 06:53:49.352398   17440 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 06:53:49.352412   17440 cache_images.go:86] Images are preloaded, skipping loading
	I1229 06:53:49.352427   17440 kubeadm.go:935] updating node { 192.168.39.121 8441 v1.35.0 docker true true} ...
	I1229 06:53:49.352542   17440 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-695625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 06:53:49.352611   17440 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 06:53:49.466471   17440 command_runner.go:130] > systemd
	I1229 06:53:49.469039   17440 cni.go:84] Creating CNI manager for ""
	I1229 06:53:49.469084   17440 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:53:49.469108   17440 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 06:53:49.469137   17440 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-695625 NodeName:functional-695625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 06:53:49.469275   17440 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-695625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 06:53:49.469338   17440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 06:53:49.495545   17440 command_runner.go:130] > kubeadm
	I1229 06:53:49.495573   17440 command_runner.go:130] > kubectl
	I1229 06:53:49.495580   17440 command_runner.go:130] > kubelet
	I1229 06:53:49.495602   17440 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 06:53:49.495647   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 06:53:49.521658   17440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1229 06:53:49.572562   17440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 06:53:49.658210   17440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1229 06:53:49.740756   17440 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I1229 06:53:49.746333   17440 command_runner.go:130] > 192.168.39.121	control-plane.minikube.internal
	I1229 06:53:49.746402   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:50.073543   17440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 06:53:50.148789   17440 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625 for IP: 192.168.39.121
	I1229 06:53:50.148837   17440 certs.go:195] generating shared ca certs ...
	I1229 06:53:50.148860   17440 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:53:50.149082   17440 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 06:53:50.149152   17440 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 06:53:50.149169   17440 certs.go:257] generating profile certs ...
	I1229 06:53:50.149320   17440 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key
	I1229 06:53:50.149413   17440 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key.a4651613
	I1229 06:53:50.149478   17440 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key
	I1229 06:53:50.149490   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 06:53:50.149508   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 06:53:50.149525   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 06:53:50.149541   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 06:53:50.149556   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 06:53:50.149573   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 06:53:50.149588   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 06:53:50.149607   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 06:53:50.149673   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 06:53:50.149723   17440 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 06:53:50.149738   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 06:53:50.149776   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 06:53:50.149837   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 06:53:50.149873   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 06:53:50.149950   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 06:53:50.150003   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:50.150023   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem -> /usr/share/ca-certificates/13486.pem
	I1229 06:53:50.150038   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /usr/share/ca-certificates/134862.pem
	I1229 06:53:50.150853   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 06:53:50.233999   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 06:53:50.308624   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 06:53:50.436538   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 06:53:50.523708   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 06:53:50.633239   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 06:53:50.746852   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 06:53:50.793885   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 06:53:50.894956   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 06:53:50.955149   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 06:53:51.018694   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 06:53:51.084938   17440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 06:53:51.127238   17440 ssh_runner.go:195] Run: openssl version
	I1229 06:53:51.136812   17440 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1229 06:53:51.136914   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.154297   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 06:53:51.175503   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182560   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182600   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182653   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.195355   17440 command_runner.go:130] > b5213941
	I1229 06:53:51.195435   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 06:53:51.217334   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.233542   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 06:53:51.248778   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255758   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255826   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255874   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.272983   17440 command_runner.go:130] > 51391683
	I1229 06:53:51.273077   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 06:53:51.303911   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.325828   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 06:53:51.347788   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360429   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360567   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360625   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.369235   17440 command_runner.go:130] > 3ec20f2e
	I1229 06:53:51.369334   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 06:53:51.381517   17440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:53:51.387517   17440 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:53:51.387548   17440 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1229 06:53:51.387554   17440 command_runner.go:130] > Device: 253,1	Inode: 1052441     Links: 1
	I1229 06:53:51.387560   17440 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1229 06:53:51.387568   17440 command_runner.go:130] > Access: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387572   17440 command_runner.go:130] > Modify: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387577   17440 command_runner.go:130] > Change: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387581   17440 command_runner.go:130] >  Birth: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387657   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 06:53:51.396600   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.397131   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 06:53:51.410180   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.410283   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 06:53:51.419062   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.419164   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 06:53:51.431147   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.431222   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 06:53:51.441881   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.442104   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 06:53:51.450219   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.450295   17440 kubeadm.go:401] StartCluster: {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:51.450396   17440 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 06:53:51.474716   17440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 06:53:51.489086   17440 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1229 06:53:51.489107   17440 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1229 06:53:51.489113   17440 command_runner.go:130] > /var/lib/minikube/etcd:
	I1229 06:53:51.489117   17440 command_runner.go:130] > member
	I1229 06:53:51.489676   17440 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 06:53:51.489694   17440 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 06:53:51.489753   17440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 06:53:51.503388   17440 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:51.503948   17440 kubeconfig.go:125] found "functional-695625" server: "https://192.168.39.121:8441"
	I1229 06:53:51.504341   17440 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:53:51.504505   17440 kapi.go:59] client config for functional-695625: &rest.Config{Host:"https://192.168.39.121:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 06:53:51.504963   17440 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 06:53:51.504986   17440 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 06:53:51.504992   17440 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 06:53:51.504998   17440 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 06:53:51.505004   17440 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 06:53:51.505012   17440 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 06:53:51.505089   17440 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1229 06:53:51.505414   17440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 06:53:51.521999   17440 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.121
	I1229 06:53:51.522047   17440 kubeadm.go:1161] stopping kube-system containers ...
	I1229 06:53:51.522115   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 06:53:51.550376   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:53:51.550407   17440 command_runner.go:130] > a014f32abcd0
	I1229 06:53:51.550415   17440 command_runner.go:130] > d81259f64136
	I1229 06:53:51.550422   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:53:51.550432   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:53:51.550441   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:53:51.550448   17440 command_runner.go:130] > 4ed279733477
	I1229 06:53:51.550455   17440 command_runner.go:130] > 1fc5fa7d9295
	I1229 06:53:51.550462   17440 command_runner.go:130] > 98261fa185f6
	I1229 06:53:51.550470   17440 command_runner.go:130] > b046056ff071
	I1229 06:53:51.550478   17440 command_runner.go:130] > b3cc8048f6d9
	I1229 06:53:51.550485   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:53:51.550491   17440 command_runner.go:130] > 64853b50a6c5
	I1229 06:53:51.550496   17440 command_runner.go:130] > bd7d900efd48
	I1229 06:53:51.550505   17440 command_runner.go:130] > 8911777281f4
	I1229 06:53:51.550511   17440 command_runner.go:130] > a123d63a8edb
	I1229 06:53:51.550516   17440 command_runner.go:130] > 548561c7ada8
	I1229 06:53:51.550521   17440 command_runner.go:130] > fd22eb0d6c14
	I1229 06:53:51.550528   17440 command_runner.go:130] > 14aafc386533
	I1229 06:53:51.550540   17440 command_runner.go:130] > abbe46bd960e
	I1229 06:53:51.550548   17440 command_runner.go:130] > 4b032678478a
	I1229 06:53:51.550556   17440 command_runner.go:130] > 0af491ef7c2f
	I1229 06:53:51.550566   17440 command_runner.go:130] > 5024b03252e3
	I1229 06:53:51.550572   17440 command_runner.go:130] > fe7b5da2f7fb
	I1229 06:53:51.550582   17440 command_runner.go:130] > ad82b94f7629
	I1229 06:53:51.552420   17440 docker.go:487] Stopping containers: [6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629]
	I1229 06:53:51.552499   17440 ssh_runner.go:195] Run: docker stop 6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629
	I1229 06:53:51.976888   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:53:51.976911   17440 command_runner.go:130] > a014f32abcd0
	I1229 06:53:58.789216   17440 command_runner.go:130] > d81259f64136
	I1229 06:53:58.789240   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:53:58.789248   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:53:58.789252   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:53:58.789256   17440 command_runner.go:130] > 4ed279733477
	I1229 06:53:58.789259   17440 command_runner.go:130] > 1fc5fa7d9295
	I1229 06:53:58.789262   17440 command_runner.go:130] > 98261fa185f6
	I1229 06:53:58.789266   17440 command_runner.go:130] > b046056ff071
	I1229 06:53:58.789269   17440 command_runner.go:130] > b3cc8048f6d9
	I1229 06:53:58.789272   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:53:58.789275   17440 command_runner.go:130] > 64853b50a6c5
	I1229 06:53:58.789278   17440 command_runner.go:130] > bd7d900efd48
	I1229 06:53:58.789281   17440 command_runner.go:130] > 8911777281f4
	I1229 06:53:58.789284   17440 command_runner.go:130] > a123d63a8edb
	I1229 06:53:58.789287   17440 command_runner.go:130] > 548561c7ada8
	I1229 06:53:58.789295   17440 command_runner.go:130] > fd22eb0d6c14
	I1229 06:53:58.789299   17440 command_runner.go:130] > 14aafc386533
	I1229 06:53:58.789303   17440 command_runner.go:130] > abbe46bd960e
	I1229 06:53:58.789306   17440 command_runner.go:130] > 4b032678478a
	I1229 06:53:58.789310   17440 command_runner.go:130] > 0af491ef7c2f
	I1229 06:53:58.789314   17440 command_runner.go:130] > 5024b03252e3
	I1229 06:53:58.789317   17440 command_runner.go:130] > fe7b5da2f7fb
	I1229 06:53:58.789321   17440 command_runner.go:130] > ad82b94f7629
	I1229 06:53:58.790986   17440 ssh_runner.go:235] Completed: docker stop 6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629: (7.238443049s)
	I1229 06:53:58.791057   17440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 06:53:58.833953   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:53:58.857522   17440 command_runner.go:130] > -rw------- 1 root root 5635 Dec 29 06:52 /etc/kubernetes/admin.conf
	I1229 06:53:58.857550   17440 command_runner.go:130] > -rw------- 1 root root 5638 Dec 29 06:52 /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.857561   17440 command_runner.go:130] > -rw------- 1 root root 1974 Dec 29 06:52 /etc/kubernetes/kubelet.conf
	I1229 06:53:58.857571   17440 command_runner.go:130] > -rw------- 1 root root 5590 Dec 29 06:52 /etc/kubernetes/scheduler.conf
	I1229 06:53:58.857610   17440 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 29 06:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Dec 29 06:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1974 Dec 29 06:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Dec 29 06:52 /etc/kubernetes/scheduler.conf
	
	I1229 06:53:58.857671   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:53:58.875294   17440 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I1229 06:53:58.876565   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:53:58.896533   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.896617   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:53:58.917540   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.936703   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.936777   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.957032   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:53:58.970678   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.970742   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:53:58.992773   17440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:53:59.007767   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.061402   17440 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 06:53:59.061485   17440 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1229 06:53:59.061525   17440 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1229 06:53:59.061923   17440 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 06:53:59.062217   17440 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1229 06:53:59.062329   17440 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1229 06:53:59.062606   17440 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1229 06:53:59.062852   17440 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1229 06:53:59.062948   17440 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1229 06:53:59.063179   17440 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 06:53:59.063370   17440 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 06:53:59.063615   17440 command_runner.go:130] > [certs] Using the existing "sa" key
	I1229 06:53:59.066703   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.686012   17440 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 06:53:59.686050   17440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1229 06:53:59.686059   17440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I1229 06:53:59.686069   17440 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 06:53:59.686078   17440 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 06:53:59.686087   17440 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 06:53:59.686203   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.995495   17440 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 06:53:59.995529   17440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 06:53:59.995539   17440 command_runner.go:130] > [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 06:53:59.995545   17440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 06:53:59.995549   17440 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1229 06:53:59.995615   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:54:00.047957   17440 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 06:54:00.047983   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 06:54:00.053966   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 06:54:00.056537   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 06:54:00.059558   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:54:00.175745   17440 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 06:54:00.175825   17440 api_server.go:52] waiting for apiserver process to appear ...
	I1229 06:54:00.175893   17440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 06:54:00.233895   17440 command_runner.go:130] > 2416
	I1229 06:54:00.233940   17440 api_server.go:72] duration metric: took 58.126409ms to wait for apiserver process to appear ...
	I1229 06:54:00.233953   17440 api_server.go:88] waiting for apiserver healthz status ...
	I1229 06:54:00.233976   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:05.236821   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:05.236865   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:10.239922   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:10.239956   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:15.242312   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:15.242347   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:20.245667   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:20.245726   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:25.248449   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:25.248501   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:30.249241   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:30.249279   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:35.251737   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:35.251771   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:40.254366   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:40.254407   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:45.257232   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:45.257275   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:50.259644   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:50.259685   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:55.261558   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:55.261592   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:55:00.263123   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:55:00.263241   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:55:00.287429   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:55:00.288145   17440 logs.go:282] 1 containers: [fb6db97d8ffe]
	I1229 06:55:00.288289   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:55:00.310519   17440 command_runner.go:130] > d81259f64136
	I1229 06:55:00.310561   17440 logs.go:282] 1 containers: [d81259f64136]
	I1229 06:55:00.310630   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:55:00.334579   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:55:00.334624   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:55:00.334692   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:55:00.353472   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:55:00.353503   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:55:00.354626   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:55:00.354714   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:55:00.376699   17440 command_runner.go:130] > 8911777281f4
	I1229 06:55:00.378105   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:55:00.378188   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:55:00.397976   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:55:00.399617   17440 logs.go:282] 1 containers: [17fe16a2822a]
	I1229 06:55:00.399707   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:55:00.419591   17440 logs.go:282] 0 containers: []
	W1229 06:55:00.419617   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:55:00.419665   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:55:00.440784   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:55:00.441985   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:55:00.442020   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:55:00.442030   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:55:00.465151   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.465192   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:55:00.465226   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.465237   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:55:00.465255   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.465271   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:55:00.465285   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:55:00.465823   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:55:00.465845   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:55:00.487618   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:55:00.487646   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:55:00.508432   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.508468   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:55:00.508482   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:55:00.508508   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:55:00.508521   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:55:00.508529   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.508541   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:55:00.508551   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:55:00.508560   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:55:00.508568   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:55:00.510308   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:55:00.510337   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:55:00.531862   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.532900   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:55:00.532924   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:55:00.554051   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:55:00.554084   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:55:00.554095   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:55:00.554109   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:55:00.554131   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:55:00.554148   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:55:00.554170   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:55:00.554189   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:55:00.554195   17440 command_runner.go:130] !  >
	I1229 06:55:00.554208   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:55:00.554224   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:55:00.554250   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:55:00.554261   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:55:00.554273   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.554316   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:55:00.554327   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:55:00.554339   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:55:00.554350   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:55:00.554366   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:55:00.554381   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:55:00.554390   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:55:00.554402   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:55:00.554414   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:55:00.554427   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:55:00.554437   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:55:00.554452   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:55:00.556555   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:55:00.556578   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:55:00.581812   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:55:00.581848   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:55:00.581857   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:55:00.581865   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581874   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581881   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:55:00.581890   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:55:00.581911   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:55:00.581919   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581930   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581942   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:55:00.581949   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581957   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581964   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581975   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581985   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581993   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582003   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582010   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582020   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582030   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582037   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582044   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582051   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582070   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582080   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582088   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582097   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582105   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582115   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582125   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582141   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582152   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582160   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582170   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582177   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582186   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582193   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582203   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582211   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582221   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582228   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582235   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582242   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582252   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582261   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582269   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582276   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582287   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582294   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582302   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582312   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582319   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582329   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582336   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582346   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582353   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582363   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582370   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582378   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582385   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.586872   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:55:00.586916   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:55:00.609702   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.609731   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.609766   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.609784   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.609811   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.609822   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:55:00.609831   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:55:00.609842   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.609848   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.609857   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:55:00.609865   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.609879   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.609890   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.609906   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.609915   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.609923   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:55:00.609943   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.609954   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:55:00.609966   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.609976   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.609983   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:55:00.609990   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.609998   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610006   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610016   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610024   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610041   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610050   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610070   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610082   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.610091   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.610100   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.610107   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:55:00.610115   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.610123   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.610131   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.610141   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.610152   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.610159   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.610168   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610179   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:55:00.610191   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:55:00.610203   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.610216   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.610223   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.610231   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.610242   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:55:00.610251   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.610258   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.610265   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.610271   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.610281   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:55:00.610290   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.610303   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.610323   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.610335   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.610345   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.610355   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:55:00.610374   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.610384   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:55:00.610394   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.610404   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.610412   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610422   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610429   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610439   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610447   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610455   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610461   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610470   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:55:00.610476   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610483   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610491   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.610500   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.610508   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.610516   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:55:00.610523   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.610531   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.610538   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.610550   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.610559   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.610567   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.610573   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610579   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.610595   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.610607   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:55:00.610615   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.610622   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.610630   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.610637   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.610644   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:55:00.610653   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.610669   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.610680   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.610692   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.610705   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.610713   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:55:00.610735   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.610744   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:55:00.610755   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.610765   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.610772   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610781   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610789   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610809   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610818   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610824   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610853   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610867   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610881   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610896   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610909   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:55:00.610922   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610936   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610949   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610964   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610979   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.610995   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611010   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.611021   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.611037   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:55:00.611048   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:55:00.611062   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.611070   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.611079   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:55:00.611087   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.611096   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.611102   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.611109   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:55:00.611118   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.611125   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:55:00.611135   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.611146   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.611157   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.611167   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.611179   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.611186   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:55:00.611199   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611213   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611226   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611241   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611266   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611281   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611295   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611310   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611325   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611342   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611355   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611370   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611382   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611404   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.611417   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:55:00.611435   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:55:00.611449   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:55:00.611464   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:55:00.611476   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:55:00.611491   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:55:00.611502   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:55:00.611517   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:55:00.611529   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:55:00.611544   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:55:00.611558   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:55:00.611574   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:55:00.611586   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:55:00.611601   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:55:00.611617   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:55:00.611631   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:55:00.611645   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:55:00.611660   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:55:00.611674   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:55:00.611689   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:55:00.611702   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:55:00.611712   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:55:00.611722   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.611732   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.611740   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.611751   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.611759   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.611767   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.611835   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.611849   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.611867   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:55:00.611877   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.611888   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:55:00.611894   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.611901   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:55:00.611909   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.611917   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.611929   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.611937   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.611946   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:55:00.611954   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.611963   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.611971   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.611981   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.611990   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.611999   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.612006   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.612019   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612031   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612046   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612063   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612079   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612093   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612112   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:55:00.612128   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612142   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612157   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612171   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612185   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612201   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612217   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612230   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612245   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612259   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612274   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612293   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:55:00.612309   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612323   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:55:00.612338   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:55:00.612354   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612366   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612380   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:55:00.612394   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.612407   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:55:00.629261   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:55:00.629293   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:55:00.671242   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:55:00.671279   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       About a minute ago   Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:55:00.671293   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:55:00.671303   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       About a minute ago   Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:55:00.671315   17440 command_runner.go:130] > fb6db97d8ffe4       5c6acd67e9cd1       About a minute ago   Exited              kube-apiserver            1                   4ed2797334771       kube-apiserver-functional-695625            kube-system
	I1229 06:55:00.671327   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       About a minute ago   Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:55:00.671337   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       About a minute ago   Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:55:00.671347   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:55:00.671362   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       2 minutes ago        Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:55:00.673604   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:55:00.673628   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:55:00.695836   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077121    2634 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:55:00.695863   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077418    2634 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:55:00.695877   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077955    2634 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:55:00.695887   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.109084    2634 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:55:00.695901   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.135073    2634 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:55:00.695910   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.137245    2634 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:55:00.695920   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.137294    2634 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:55:00.695934   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.137340    2634 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:55:00.695942   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.209773    2634 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:55:00.695952   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.209976    2634 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:55:00.695962   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210050    2634 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:55:00.695975   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210361    2634 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:55:00.696001   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210374    2634 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:55:00.696011   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210392    2634 policy_none.go:50] "Start"
	I1229 06:55:00.696020   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210408    2634 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:55:00.696029   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210421    2634 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:55:00.696038   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210527    2634 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:55:00.696045   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210534    2634 policy_none.go:44] "Start"
	I1229 06:55:00.696056   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.219245    2634 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:55:00.696067   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.220437    2634 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:55:00.696078   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.220456    2634 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:55:00.696089   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.221071    2634 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:55:00.696114   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.226221    2634 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:55:00.696126   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.239387    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696144   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.239974    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696155   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.240381    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696165   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.262510    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696185   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283041    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696208   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283087    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696228   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283118    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696247   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283135    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696268   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283151    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696288   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283163    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696309   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283175    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696329   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283189    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696357   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283209    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696378   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283223    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696400   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283249    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696416   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.285713    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-functional-695625\" already exists" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696428   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.290012    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-functional-695625\" already exists" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696442   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.290269    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-functional-695625\" already exists" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696454   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.304300    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-functional-695625\" already exists" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696466   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.336817    2634 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.696475   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.351321    2634 kubelet_node_status.go:123] "Node was previously registered" node="functional-695625"
	I1229 06:55:00.696486   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.351415    2634 kubelet_node_status.go:77] "Successfully registered node" node="functional-695625"
	I1229 06:55:00.696493   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.033797    2634 apiserver.go:52] "Watching apiserver"
	I1229 06:55:00.696503   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.077546    2634 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1229 06:55:00.696527   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.181689    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-functional-695625" podStartSLOduration=3.181660018 podStartE2EDuration="3.181660018s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.180947341 +0000 UTC m=+1.223544146" watchObservedRunningTime="2025-12-29 06:52:42.181660018 +0000 UTC m=+1.224256834"
	I1229 06:55:00.696555   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.221952    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-functional-695625" podStartSLOduration=3.221936027 podStartE2EDuration="3.221936027s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.202120755 +0000 UTC m=+1.244717560" watchObservedRunningTime="2025-12-29 06:52:42.221936027 +0000 UTC m=+1.264532905"
	I1229 06:55:00.696583   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.238774    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-695625" podStartSLOduration=3.238759924 podStartE2EDuration="3.238759924s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.238698819 +0000 UTC m=+1.281295638" watchObservedRunningTime="2025-12-29 06:52:42.238759924 +0000 UTC m=+1.281356744"
	I1229 06:55:00.696609   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.238905    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-functional-695625" podStartSLOduration=3.238868136 podStartE2EDuration="3.238868136s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.224445467 +0000 UTC m=+1.267042290" watchObservedRunningTime="2025-12-29 06:52:42.238868136 +0000 UTC m=+1.281464962"
	I1229 06:55:00.696622   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266475    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696634   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266615    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696651   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266971    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696664   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.267487    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696678   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.287234    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-functional-695625\" already exists" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696690   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.287316    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696704   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.292837    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-functional-695625\" already exists" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696718   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.293863    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.696730   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.293764    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-functional-695625\" already exists" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696745   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.294163    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.696757   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.298557    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-functional-695625\" already exists" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696770   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.298633    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696782   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.272537    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.696807   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273148    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696835   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273501    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.696850   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273627    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696863   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: E1229 06:52:44.279056    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696877   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: E1229 06:52:44.279353    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696887   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: I1229 06:52:44.754123    2634 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1229 06:55:00.696899   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: I1229 06:52:44.756083    2634 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1229 06:55:00.696917   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.407560    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94mg5\" (UniqueName: \"kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696938   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.408503    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-proxy\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696958   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.408957    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-xtables-lock\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696976   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.409131    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-lib-modules\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696991   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528153    2634 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697004   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528186    2634 projected.go:196] Error preparing data for projected volume kube-api-access-94mg5 for pod kube-system/kube-proxy-g7lp9: configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697032   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528293    2634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5 podName:9c2c2ac1-7fa0-427d-b78e-ee14e169895a nodeName:}" failed. No retries permitted until 2025-12-29 06:52:46.028266861 +0000 UTC m=+5.070863673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-94mg5" (UniqueName: "kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5") pod "kube-proxy-g7lp9" (UID: "9c2c2ac1-7fa0-427d-b78e-ee14e169895a") : configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697044   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.406131    2634 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	I1229 06:55:00.697064   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519501    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64sn\" (UniqueName: \"kubernetes.io/projected/00a95e37-1394-45a7-a376-b195e31e3e9c-kube-api-access-b64sn\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:55:00.697084   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519550    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a95e37-1394-45a7-a376-b195e31e3e9c-config-volume\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:55:00.697104   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519571    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:55:00.697124   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519587    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:55:00.697138   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.411642    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605"
	I1229 06:55:00.697151   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.545186    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.697170   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731196    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f201ca-6d54-4e15-9584-396fb1486f3c-tmp\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:55:00.697192   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731252    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc5d\" (UniqueName: \"kubernetes.io/projected/b5f201ca-6d54-4e15-9584-396fb1486f3c-kube-api-access-ghc5d\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:55:00.697206   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.628275    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697229   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.634714    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9mrnn" podStartSLOduration=2.634698273 podStartE2EDuration="2.634698273s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.631484207 +0000 UTC m=+7.674081027" watchObservedRunningTime="2025-12-29 06:52:48.634698273 +0000 UTC m=+7.677295093"
	I1229 06:55:00.697245   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.649761    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.697268   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.694857    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfq7m" podStartSLOduration=2.694842541 podStartE2EDuration="2.694842541s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.672691157 +0000 UTC m=+7.715287974" watchObservedRunningTime="2025-12-29 06:52:48.694842541 +0000 UTC m=+7.737439360"
	I1229 06:55:00.697296   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.728097    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.728082592 podStartE2EDuration="1.728082592s" podCreationTimestamp="2025-12-29 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.696376688 +0000 UTC m=+7.738973499" watchObservedRunningTime="2025-12-29 06:52:48.728082592 +0000 UTC m=+7.770679413"
	I1229 06:55:00.697310   17440 command_runner.go:130] > Dec 29 06:52:49 functional-695625 kubelet[2634]: E1229 06:52:49.674249    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697322   17440 command_runner.go:130] > Dec 29 06:52:50 functional-695625 kubelet[2634]: E1229 06:52:50.680852    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697336   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.223368    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.697361   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: I1229 06:52:52.243928    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g7lp9" podStartSLOduration=7.243911092 podStartE2EDuration="7.243911092s" podCreationTimestamp="2025-12-29 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.744380777 +0000 UTC m=+7.786977597" watchObservedRunningTime="2025-12-29 06:52:52.243911092 +0000 UTC m=+11.286507895"
	I1229 06:55:00.697376   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.396096    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.697388   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.693687    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.697402   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: E1229 06:52:53.390926    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.697420   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979173    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:55:00.697442   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979225    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:55:00.697463   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979732    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	I1229 06:55:00.697483   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.981248    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "kube-api-access-lc5xj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	I1229 06:55:00.697499   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079447    2634 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:55:00.697515   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079521    2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:55:00.697526   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.715729    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697536   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.756456    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697554   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: E1229 06:52:54.758451    2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697576   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.758508    2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"} err="failed to get container status \"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697591   17440 command_runner.go:130] > Dec 29 06:52:55 functional-695625 kubelet[2634]: I1229 06:52:55.144582    2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4313c5f-3b86-48de-8f3c-02d7e007542a" path="/var/lib/kubelet/pods/c4313c5f-3b86-48de-8f3c-02d7e007542a/volumes"
	I1229 06:55:00.697608   17440 command_runner.go:130] > Dec 29 06:52:58 functional-695625 kubelet[2634]: E1229 06:52:58.655985    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.697621   17440 command_runner.go:130] > Dec 29 06:53:20 functional-695625 kubelet[2634]: E1229 06:53:20.683378    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697637   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913108    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697651   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913180    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697669   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913193    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697710   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915141    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697726   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915181    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697746   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915192    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697762   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139490    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.697775   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139600    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697790   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139623    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697815   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139634    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697830   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917175    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697846   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917271    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697860   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917284    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697876   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918722    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697892   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918780    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697906   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918792    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697923   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139097    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.697937   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139170    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697951   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139187    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697966   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139214    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697986   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921730    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698002   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921808    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698029   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921823    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698046   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.923664    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698060   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924161    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698081   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924185    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698097   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139396    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698113   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139458    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698126   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139472    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698141   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139485    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698155   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698172   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698187   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:55:00.698202   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698218   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:55:00.698235   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698274   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698293   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698309   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698325   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698341   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698362   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698378   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698395   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698408   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698424   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698439   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698455   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698469   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698484   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698501   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698514   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698527   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698541   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698554   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698577   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698590   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698606   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698620   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698634   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698650   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698666   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698682   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698696   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698711   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698727   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698743   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698756   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698769   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698784   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698808   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698823   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698840   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698853   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698868   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698886   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698903   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698916   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698933   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698948   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698962   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698976   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698993   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:55:00.699007   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:55:00.699018   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699031   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699042   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.699055   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.699067   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:55:00.699078   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.699093   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699105   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699119   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699130   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:55:00.699145   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.699157   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.699180   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:55:00.699195   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.699207   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:55:00.699224   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:55:00.699243   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:55:00.699256   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:55:00.699269   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.699284   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.699310   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.699330   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.699343   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:55:00.699362   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:55:00.699380   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.699407   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:55:00.699439   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:55:00.699460   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:55:00.699477   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.699497   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.699515   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:55:00.699533   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.699619   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.699640   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.699660   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699683   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699709   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699722   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:55:00.699738   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699750   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:55:00.699763   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699774   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:55:00.699785   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699807   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:55:00.699820   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.699834   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699846   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699861   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699872   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.699886   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.699931   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699946   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699956   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:55:00.699972   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700008   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:55:00.700031   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700053   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700067   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:55:00.700078   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:55:00.700091   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:55:00.700102   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700116   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:55:00.700129   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.700139   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:55:00.700159   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700168   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:55:00.700179   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:55:00.700190   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700199   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700217   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700228   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700240   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:55:00.700250   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:55:00.700268   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700281   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.700291   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:55:00.700310   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700321   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700331   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:55:00.700349   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700364   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700375   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700394   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700405   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700415   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700427   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700454   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:55:00.700474   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700515   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:55:00.700529   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700539   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700558   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700570   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.700578   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:55:00.700584   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:55:00.700590   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:55:00.700597   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:55:00.700603   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:55:00.700612   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:55:00.700620   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.700631   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:55:00.700641   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:55:00.700652   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:55:00.700662   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:55:00.700674   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.700684   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:55:00.700696   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:55:00.700707   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:55:00.700717   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:55:00.700758   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:55:00.700770   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:55:00.700779   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:55:00.700790   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:55:00.700816   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:55:00.700831   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:55:00.700846   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:55:00.700858   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:55:00.700866   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:55:00.700879   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:55:00.700891   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:55:00.700905   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:55:00.700912   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:55:00.700921   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:55:00.700932   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.700943   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:55:00.700951   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:55:00.700963   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:55:00.700971   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:55:00.700986   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:55:00.701000   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:55:00.701008   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:55:00.701020   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:55:00.701029   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:55:00.701037   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:55:00.701046   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:55:00.701061   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:55:00.701073   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:55:00.701082   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:55:00.701093   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:55:00.701100   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:55:00.701114   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:55:00.701124   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:55:00.701143   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.701160   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.701170   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:55:00.701178   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:55:00.701188   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:55:00.701201   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:55:00.701210   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:55:00.701218   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:55:00.701226   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:55:00.701237   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:55:00.701246   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:55:00.701256   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:55:00.701266   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:55:00.701277   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:55:00.701287   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:55:00.701297   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:55:00.701308   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:55:00.701322   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701334   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701348   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701361   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:55:00.701372   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:55:00.701385   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:55:00.701399   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:55:00.701410   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:55:00.701422   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:55:00.701433   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701447   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.701458   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:55:00.701471   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:55:00.701483   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.701496   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:55:00.701508   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:55:00.701521   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:55:00.701533   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701550   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.701567   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:55:00.701581   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701592   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701611   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:55:00.701625   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701642   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:55:00.701678   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:55:00.701695   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:55:00.701705   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.701716   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701735   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:55:00.701749   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701764   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:55:00.701780   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:55:00.701807   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701827   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701847   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701867   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701886   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.701907   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701928   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701948   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701971   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701995   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.702014   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.702027   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.755255   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:55:00.755293   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:55:00.771031   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:55:00.771066   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:55:00.771079   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:55:00.771088   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:55:00.771097   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:55:00.771103   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:55:00.771109   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:55:00.771116   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:55:00.771121   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:55:00.771126   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:55:00.771131   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:55:00.771136   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:55:00.771143   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:55:00.771153   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:55:00.771158   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:55:00.771165   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:55:00.771175   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:55:00.771185   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:55:00.771191   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:55:00.771196   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:55:00.771202   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:55:00.772218   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:55:00.772246   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:56:00.863293   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:56:00.863340   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.091082059s)
	W1229 06:56:00.863385   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:56:00.863402   17440 logs.go:123] Gathering logs for kube-apiserver [fb6db97d8ffe] ...
	I1229 06:56:00.863420   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6db97d8ffe"
	I1229 06:56:00.897112   17440 command_runner.go:130] ! I1229 06:53:50.588377       1 options.go:263] external host was not specified, using 192.168.39.121
	I1229 06:56:00.897142   17440 command_runner.go:130] ! I1229 06:53:50.597275       1 server.go:150] Version: v1.35.0
	I1229 06:56:00.897153   17440 command_runner.go:130] ! I1229 06:53:50.597323       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:00.897164   17440 command_runner.go:130] ! E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	W1229 06:56:00.898716   17440 logs.go:138] Found kube-apiserver [fb6db97d8ffe] problem: E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:56:00.898738   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:56:00.898750   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:56:00.935530   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:00.938590   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:00.938653   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib
/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:00.938666   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:00.938679   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:00.938689   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:00.938712   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:00.938728   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:00.938838   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["
*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:00.938875   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:56:00.938892   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:00.938902   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:56:00.938913   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:56:00.938922   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:00.938935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:00.938946   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:00.938958   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:56:00.938969   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:00.938978   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:00.938993   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:00.939003   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:00.939022   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:00.939035   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:00.939046   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:00.939053   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:00.939062   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:00.939071   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:56:00.939081   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:56:00.939091   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:00.939111   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:00.939126   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:00.939142   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:00.939162   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:00.939181   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:00.939213   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:00.939249   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:00.939258   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:00.939274   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:00.939289   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:00.939302   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:00.939324   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:00.939342   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.939352   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:00.939362   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:00.939377   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:00.939389   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:00.939404   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:56:00.939423   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:56:00.939439   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:56:00.939458   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:00.939467   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:56:00.939478   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:00.939494   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:00.939513   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:56:00.939528   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:56:00.939544   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:00.939564   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:00.939586   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:00.939603   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:00.939616   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:00.939882   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:00.939915   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:00.939932   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:00.939947   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:00.939960   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:56:00.939998   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:00.940030   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:00.940064   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:00.940122   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940150   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:56:00.940167   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:56:00.940187   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:00.940204   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:00.940257   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940277   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:56:00.940301   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:00.940334   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:00.940371   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940389   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.940425   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940447   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.940473   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:00.955065   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:56:00.955108   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 06:56:00.955188   17440 out.go:285] X Problems detected in kube-apiserver [fb6db97d8ffe]:
	X Problems detected in kube-apiserver [fb6db97d8ffe]:
	W1229 06:56:00.955202   17440 out.go:285]   E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	  E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:56:00.955209   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:56:00.955215   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
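	The kube-apiserver problem reported just above ("failed to listen on 0.0.0.0:8441: bind: address already in use") is a port conflict on the node: something already holds 8441 when the restarted apiserver tries to bind. A minimal way to see what owns the port would be to shell into this profile's node and list listeners; this is a sketch, the profile name is taken from the log and ss is assumed to be available in the minikube guest:

	    # open a shell on the node of the failing profile
	    minikube ssh -p functional-695625

	    # inside the guest: show which process is bound to 8441
	    sudo ss -tlnp | grep 8441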
	I1229 06:56:10.957344   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:56:15.961183   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:56:15.961319   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:56:15.981705   17440 command_runner.go:130] > 18d0015c724a
	I1229 06:56:15.982641   17440 logs.go:282] 1 containers: [18d0015c724a]
	I1229 06:56:15.982732   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:56:16.002259   17440 command_runner.go:130] > 6b7711ee25a2
	I1229 06:56:16.002292   17440 command_runner.go:130] > d81259f64136
	I1229 06:56:16.002322   17440 logs.go:282] 2 containers: [6b7711ee25a2 d81259f64136]
	I1229 06:56:16.002399   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:56:16.021992   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:56:16.022032   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:56:16.022113   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:56:16.048104   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:56:16.048133   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:56:16.049355   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:56:16.049441   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:56:16.071523   17440 command_runner.go:130] > 8911777281f4
	I1229 06:56:16.072578   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:56:16.072668   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:56:16.092921   17440 command_runner.go:130] > f48fc04e3475
	I1229 06:56:16.092948   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:56:16.092975   17440 logs.go:282] 2 containers: [f48fc04e3475 17fe16a2822a]
	I1229 06:56:16.093047   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:56:16.113949   17440 logs.go:282] 0 containers: []
	W1229 06:56:16.113983   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:56:16.114047   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:56:16.135700   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:56:16.135739   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:56:16.135766   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:56:16.135786   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:56:16.152008   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:56:16.152038   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:56:16.152046   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:56:16.152054   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:56:16.152063   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:56:16.152069   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:56:16.152076   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:56:16.152081   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:56:16.152086   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:56:16.152091   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:56:16.152096   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:56:16.152102   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:56:16.152107   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:56:16.152112   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:56:16.152119   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:56:16.152128   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:56:16.152148   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:56:16.152164   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:56:16.152180   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:56:16.152190   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:56:16.152201   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:56:16.152209   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:56:16.152217   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:56:16.153163   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:56:16.153192   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:56:16.174824   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:56:16.174856   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:56:16.174862   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:56:16.174873   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:56:16.174892   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:56:16.174900   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:56:16.174913   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:56:16.174920   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:56:16.174924   17440 command_runner.go:130] !  >
	I1229 06:56:16.174931   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:56:16.174941   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:56:16.174957   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:56:16.174966   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:56:16.174975   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.174985   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:56:16.174994   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:56:16.175003   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:56:16.175012   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:56:16.175024   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:56:16.175033   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:56:16.175040   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:56:16.175050   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:56:16.175074   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:56:16.175325   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:56:16.175351   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:56:16.175362   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
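	The kube-proxy log above drops to single-stack IPv4 because the guest kernel exposes no ip6tables nat table. A quick check one could run in the guest to confirm this (a sketch; module name and tooling availability in the minikube ISO are assumptions):

	    # does the kernel expose an IPv6 nat table?
	    sudo ip6tables -t nat -L -n

	    # is the ip6table_nat module present or loadable?
	    lsmod | grep ip6table_nat || sudo modprobe ip6table_nat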
	I1229 06:56:16.177120   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:56:16.177144   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:56:16.222627   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:56:16.222665   17440 command_runner.go:130] > 18d0015c724a8       5c6acd67e9cd1       5 seconds ago       Exited              kube-apiserver            3                   d3819cc8ab802       kube-apiserver-functional-695625            kube-system
	I1229 06:56:16.222684   17440 command_runner.go:130] > f48fc04e34751       2c9a4b058bd7e       16 seconds ago      Running             kube-controller-manager   2                   0a96e34d38f8c       kube-controller-manager-functional-695625   kube-system
	I1229 06:56:16.222707   17440 command_runner.go:130] > 6b7711ee25a2d       0a108f7189562       16 seconds ago      Running             etcd                      2                   173054afc2f39       etcd-functional-695625                      kube-system
	I1229 06:56:16.222730   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       2 minutes ago       Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:56:16.222749   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       2 minutes ago       Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:56:16.222768   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       2 minutes ago       Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:56:16.222810   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       2 minutes ago       Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:56:16.222831   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       2 minutes ago       Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:56:16.222851   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:56:16.222879   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       3 minutes ago       Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:56:16.225409   17440 logs.go:123] Gathering logs for etcd [6b7711ee25a2] ...
	I1229 06:56:16.225439   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b7711ee25a2"
	I1229 06:56:16.247416   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.924768Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.247449   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925193Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:16.247516   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925252Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:16.247533   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925487Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:16.247545   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925602Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.247555   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925710Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:16.247582   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925810Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.247605   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.934471Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:16.247698   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.935217Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:16.247722   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.937503Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000068080}"}
	I1229 06:56:16.247733   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940423Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:16.247745   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940850Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.479356ms"}
	I1229 06:56:16.247753   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.941120Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":499}
	I1229 06:56:16.247762   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945006Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:16.247774   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945707Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:16.247782   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945966Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:16.247807   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.951906Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":499}
	I1229 06:56:16.247816   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952063Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:16.247825   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952160Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:16.247840   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952338Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:16.247851   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952385Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:16.247867   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952396Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:16.247878   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952406Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:16.247886   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952416Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:16.247893   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952460Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:16.247902   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:16.247914   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 3"}
	I1229 06:56:16.247924   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 3, commit: 499, applied: 0, lastindex: 499, lastterm: 3]"}
	I1229 06:56:16.247935   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.955095Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:16.247952   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.961356Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:16.247965   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.967658Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:16.247975   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.968487Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:16.247988   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969020Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.248000   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969260Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:16.248016   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969708Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:16.248035   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970043Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.248063   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970828Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:16.248074   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971046Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:16.248083   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970057Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.248092   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971258Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:16.248103   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970152Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:16.248113   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971336Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:16.248126   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971370Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:16.248136   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970393Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:16.248153   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972410Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:16.248166   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972698Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:16.248177   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 3"}
	I1229 06:56:16.248186   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 3"}
	I1229 06:56:16.248198   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.248208   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.248219   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 4"}
	I1229 06:56:16.248228   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 4"}
	I1229 06:56:16.248240   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.248248   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 4"}
	I1229 06:56:16.248260   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.356018Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 4"}
	I1229 06:56:16.248275   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358237Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:16.248287   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358323Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.248295   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358268Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.248304   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:16.248312   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:16.248320   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360417Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.248331   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.248341   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:16.248352   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363760Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
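	The restarted etcd container above reports a clean leader election and client serving on 2379. If one wanted to verify that directly, etcdctl could be pointed at the same endpoint using the certificate paths shown in the startup arguments; this is a sketch run from inside the etcd container, assuming etcdctl ships in that image:

	    # verify the member is serving and healthy
	    ETCDCTL_API=3 etcdctl \
	      --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint health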
	I1229 06:56:16.254841   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:56:16.254869   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:56:16.278647   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.278679   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:16.278723   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:16.278736   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:16.278750   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.278759   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:16.278780   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.278809   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:16.278890   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:16.278913   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:56:16.278923   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:16.278935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:56:16.278946   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:56:16.278957   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:16.278971   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:16.278982   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:16.278996   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:56:16.279006   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:16.279014   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:16.279031   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:16.279040   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:16.279072   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:16.279083   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:16.279091   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:16.279101   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:16.279110   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:16.279121   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:56:16.279132   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:56:16.279142   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:16.279159   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:16.279173   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:16.279183   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:16.279195   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.279208   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:16.279226   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.279249   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:16.279260   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:16.279275   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:16.279289   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:16.279300   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:16.279313   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:16.279322   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279332   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:16.279343   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:16.279359   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:16.279374   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:16.279386   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:56:16.279396   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:56:16.279406   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:56:16.279418   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.279429   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:56:16.279439   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.279451   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.279460   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:56:16.279469   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.279479   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.279494   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:16.279503   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.279513   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.279523   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:16.279531   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:16.279541   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.279551   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:16.279562   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:16.279570   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:56:16.279585   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:16.279603   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:16.279622   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:16.279661   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279676   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.279688   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:56:16.279698   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:16.279711   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:16.279730   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279741   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:56:16.279751   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:16.279764   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:16.279785   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279805   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279825   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279836   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279852   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:16.287590   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:56:16.287613   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:56:16.310292   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:56:16.310320   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:56:16.331009   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:56:16.331044   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:56:16.331054   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:56:16.331067   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331076   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331083   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:56:16.331093   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:56:16.331114   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:56:16.331232   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331256   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331268   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:56:16.331275   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331289   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331298   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331316   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331329   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331341   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331355   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331363   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331374   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331386   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331400   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331413   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331425   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331441   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331454   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331468   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331478   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331488   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331496   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331506   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331519   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331529   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331537   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331547   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331555   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331564   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331572   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331580   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331592   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331604   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331618   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331629   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331645   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331659   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331673   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331689   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331703   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331716   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331728   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331740   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331756   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331771   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331784   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331816   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331830   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331847   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331863   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331879   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331894   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331908   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.336243   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:56:16.336267   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:56:16.358115   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358145   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358155   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358165   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358177   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.358186   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:56:16.358194   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:56:16.358203   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358209   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.358220   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:56:16.358229   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.358241   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.358254   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.358266   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.358278   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.358285   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:56:16.358307   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.358315   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:56:16.358328   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.358336   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.358343   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:56:16.358350   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358360   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.358369   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.358377   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.358385   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.358399   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.358408   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.358415   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358425   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358436   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358445   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358455   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:56:16.358463   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.358474   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.358481   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.358491   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.358500   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.358508   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.358515   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358530   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:56:16.358543   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:56:16.358555   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.358576   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.358584   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.358593   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.358604   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:56:16.358614   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.358621   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.358628   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.358635   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.358644   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:56:16.358653   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.358666   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.358685   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.358697   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.358707   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.358716   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:56:16.358735   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.358745   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:56:16.358755   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.358763   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.358805   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.358818   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.358827   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.358837   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.358847   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.358854   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.358861   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358867   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:56:16.358874   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358881   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358893   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358904   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358913   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358921   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:56:16.358930   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.358942   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.358950   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.358959   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.358970   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.358979   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.358986   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358992   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.359001   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.359011   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:56:16.359021   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.359029   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.359036   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.359042   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.359052   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:56:16.359060   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.359071   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.359084   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.359094   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.359106   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.359113   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:56:16.359135   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.359144   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:56:16.359154   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.359164   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.359172   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.359182   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.359190   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.359198   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.359206   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.359213   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.359244   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359260   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359275   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359288   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359300   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:56:16.359313   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359328   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359343   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359357   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359372   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359386   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359399   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.359410   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.359422   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:56:16.359435   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:56:16.359442   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.359452   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.359460   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:56:16.359468   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.359474   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.359481   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.359487   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:56:16.359494   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.359502   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:56:16.359511   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.359521   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.359532   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.359544   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.359553   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.359561   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:56:16.359574   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359590   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359602   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359617   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359630   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359646   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359660   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359676   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359689   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359706   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359719   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359731   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359744   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359763   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.359779   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:56:16.359800   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:56:16.359813   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:56:16.359827   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:56:16.359837   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:56:16.359852   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:56:16.359864   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:56:16.359878   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:56:16.359890   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:56:16.359904   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:56:16.359916   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:56:16.359932   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:56:16.359945   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:56:16.359960   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:56:16.359975   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:56:16.359988   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:56:16.360003   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:56:16.360019   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:56:16.360037   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:56:16.360051   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:56:16.360064   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:56:16.360074   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:56:16.360085   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.360093   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.360102   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.360113   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.360121   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.360130   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.360163   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.360172   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.360189   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:56:16.360197   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.360204   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:56:16.360210   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.360218   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:56:16.360225   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.360236   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.360245   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.360255   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.360263   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:56:16.360271   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.360280   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.360288   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.360297   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.360308   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.360317   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.360326   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.360338   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360353   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360365   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360380   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360392   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360410   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360426   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:56:16.360441   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360454   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360467   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360482   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360494   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360510   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360525   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360538   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360553   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360566   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360582   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360599   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:56:16.360617   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360628   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:56:16.360643   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:56:16.360656   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360671   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360682   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:56:16.360699   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.360711   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:56:16.360726   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.360736   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:56:16.360749   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360762   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.377860   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:56:16.377891   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:56:16.394828   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.406131    2634 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	I1229 06:56:16.394877   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519501    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64sn\" (UniqueName: \"kubernetes.io/projected/00a95e37-1394-45a7-a376-b195e31e3e9c-kube-api-access-b64sn\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:56:16.394896   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519550    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a95e37-1394-45a7-a376-b195e31e3e9c-config-volume\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:56:16.394920   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519571    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:56:16.394952   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519587    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:56:16.394976   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.411642    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605"
	I1229 06:56:16.394988   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.545186    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.395012   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731196    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f201ca-6d54-4e15-9584-396fb1486f3c-tmp\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:56:16.395045   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731252    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc5d\" (UniqueName: \"kubernetes.io/projected/b5f201ca-6d54-4e15-9584-396fb1486f3c-kube-api-access-ghc5d\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:56:16.395075   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.628275    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395109   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.634714    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9mrnn" podStartSLOduration=2.634698273 podStartE2EDuration="2.634698273s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.631484207 +0000 UTC m=+7.674081027" watchObservedRunningTime="2025-12-29 06:52:48.634698273 +0000 UTC m=+7.677295093"
	I1229 06:56:16.395143   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.649761    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.395179   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.694857    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfq7m" podStartSLOduration=2.694842541 podStartE2EDuration="2.694842541s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.672691157 +0000 UTC m=+7.715287974" watchObservedRunningTime="2025-12-29 06:52:48.694842541 +0000 UTC m=+7.737439360"
	I1229 06:56:16.395221   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.728097    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.728082592 podStartE2EDuration="1.728082592s" podCreationTimestamp="2025-12-29 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.696376688 +0000 UTC m=+7.738973499" watchObservedRunningTime="2025-12-29 06:52:48.728082592 +0000 UTC m=+7.770679413"
	I1229 06:56:16.395242   17440 command_runner.go:130] > Dec 29 06:52:49 functional-695625 kubelet[2634]: E1229 06:52:49.674249    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395263   17440 command_runner.go:130] > Dec 29 06:52:50 functional-695625 kubelet[2634]: E1229 06:52:50.680852    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395283   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.223368    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.395324   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: I1229 06:52:52.243928    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g7lp9" podStartSLOduration=7.243911092 podStartE2EDuration="7.243911092s" podCreationTimestamp="2025-12-29 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.744380777 +0000 UTC m=+7.786977597" watchObservedRunningTime="2025-12-29 06:52:52.243911092 +0000 UTC m=+11.286507895"
	I1229 06:56:16.395347   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.396096    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.395368   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.693687    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.395390   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: E1229 06:52:53.390926    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.395423   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979173    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:56:16.395451   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979225    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:56:16.395496   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979732    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	I1229 06:56:16.395529   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.981248    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "kube-api-access-lc5xj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	I1229 06:56:16.395551   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079447    2634 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:56:16.395578   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079521    2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:56:16.395597   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.715729    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395618   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.756456    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395641   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: E1229 06:52:54.758451    2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395678   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.758508    2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"} err="failed to get container status \"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395702   17440 command_runner.go:130] > Dec 29 06:52:55 functional-695625 kubelet[2634]: I1229 06:52:55.144582    2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4313c5f-3b86-48de-8f3c-02d7e007542a" path="/var/lib/kubelet/pods/c4313c5f-3b86-48de-8f3c-02d7e007542a/volumes"
	I1229 06:56:16.395719   17440 command_runner.go:130] > Dec 29 06:52:58 functional-695625 kubelet[2634]: E1229 06:52:58.655985    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.395743   17440 command_runner.go:130] > Dec 29 06:53:20 functional-695625 kubelet[2634]: E1229 06:53:20.683378    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395770   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913108    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.395806   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913180    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395831   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913193    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395859   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915141    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.395885   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915181    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395903   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915192    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395929   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139490    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.395956   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139600    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395981   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139623    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396000   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139634    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396027   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917175    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396052   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917271    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396087   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917284    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396114   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918722    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396138   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918780    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396161   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918792    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396186   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139097    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396267   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139170    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396295   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139187    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396315   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139214    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396339   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921730    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396362   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921808    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396387   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921823    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396413   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.923664    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396433   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924161    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396458   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924185    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396484   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139396    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396508   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139458    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396526   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139472    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396550   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139485    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396585   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396609   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396634   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:56:16.396662   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396687   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:56:16.396711   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396739   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396763   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396786   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396821   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396848   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396872   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396891   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396919   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396943   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396966   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396989   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397016   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397040   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397064   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397089   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397114   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397139   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397161   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397187   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397211   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397233   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397256   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397281   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397307   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397330   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397358   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397387   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397424   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397450   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397477   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397500   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397521   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397544   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397571   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397594   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397618   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397644   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397668   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397686   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397742   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397766   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397786   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397818   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397849   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397872   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397897   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397918   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:56:16.397940   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:56:16.397961   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.397984   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.398006   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.398027   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.398047   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:56:16.398071   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.398100   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398122   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398141   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398162   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:56:16.398186   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:56:16.398209   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:56:16.398244   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:56:16.398272   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.398294   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:56:16.398317   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:56:16.398350   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:56:16.398371   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:56:16.398394   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.398413   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.398456   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.398481   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.398498   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:56:16.398525   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:56:16.398557   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.398599   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:56:16.398632   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:56:16.398661   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:56:16.398683   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.398714   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.398746   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:56:16.398769   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.398813   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.398843   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.398873   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398910   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398942   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398963   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:56:16.398985   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399007   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:56:16.399028   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399052   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:56:16.399082   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399104   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:56:16.399121   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.399145   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399170   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399191   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399209   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399231   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.399253   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399275   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399295   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:56:16.399309   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399328   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:56:16.399366   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399402   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399416   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:56:16.399427   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:56:16.399440   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:56:16.399454   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399467   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:56:16.399491   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399517   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.399553   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399565   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:56:16.399576   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:56:16.399588   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399598   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.399618   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399629   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399640   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:56:16.399653   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:56:16.399671   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399684   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399694   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.399724   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399741   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399752   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:56:16.399771   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399782   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399801   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.399822   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399834   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399845   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399857   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399866   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:56:16.399885   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399928   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:56:16.400087   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.400109   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.400130   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.400140   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.400147   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:56:16.400153   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:56:16.400162   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:56:16.400169   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:56:16.400175   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:56:16.400184   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:56:16.400193   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.400201   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:56:16.400213   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:56:16.400222   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:56:16.400233   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:56:16.400243   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.400253   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:56:16.400262   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:56:16.400272   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:56:16.400281   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:56:16.400693   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:56:16.400713   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:56:16.400724   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:56:16.400734   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:56:16.400742   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:56:16.400751   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:56:16.400760   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:56:16.400768   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:56:16.400780   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:56:16.400812   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:56:16.400833   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:56:16.400853   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:56:16.400868   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:56:16.400877   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:56:16.400887   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.400896   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:56:16.400903   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:56:16.400915   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:56:16.400924   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:56:16.400936   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:56:16.400950   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:56:16.400961   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:56:16.400972   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:56:16.400985   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:56:16.400993   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:56:16.401003   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:56:16.401016   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:56:16.401027   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:56:16.401036   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:56:16.401045   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:56:16.401053   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:56:16.401070   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:56:16.401083   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:56:16.401100   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.401120   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.401132   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:56:16.401141   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:56:16.401150   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:56:16.401160   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:56:16.401173   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:56:16.401180   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:56:16.401189   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:56:16.401198   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:56:16.401209   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:56:16.401217   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:56:16.401228   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:56:16.401415   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:56:16.401435   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:56:16.401444   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:56:16.401456   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:56:16.401467   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401486   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401508   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401529   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:56:16.401553   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:56:16.401575   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:56:16.401589   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:56:16.401602   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:56:16.401614   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:56:16.401628   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401640   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.401653   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:56:16.401667   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:56:16.401679   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.401693   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:56:16.401706   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:56:16.401720   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:56:16.401733   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401745   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.401762   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:56:16.401816   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401840   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401871   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:56:16.401900   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401920   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:56:16.401958   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.401977   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.401987   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402002   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402019   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:56:16.402033   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402048   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:56:16.402065   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:56:16.402085   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402107   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402134   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402169   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402204   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.402228   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402250   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402272   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402294   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402314   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.402335   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.402349   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402367   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:56:16.402405   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.402421   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.402433   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402444   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402530   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402557   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:56:16.402569   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402585   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:56:16.402600   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402639   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.402655   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.402666   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402677   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402697   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:56:16.402714   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:56:16.402726   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402737   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.402752   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:56:16.402917   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.402934   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.402947   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.402959   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.402972   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.402996   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403011   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403026   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403043   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403056   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403070   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403082   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403096   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403110   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403125   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403138   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403152   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403292   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403310   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403325   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403339   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403361   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403376   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403389   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403402   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403417   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403428   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403450   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403464   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.403480   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403495   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403506   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403636   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403671   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403686   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403702   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403720   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403739   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403753   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403767   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403780   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403806   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403820   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403833   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403850   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403871   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403890   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403914   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403936   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403952   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:56:16.403976   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403994   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.404007   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.404022   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.404034   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.404046   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:56:16.404066   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.404085   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:56:16.404122   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.454878   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:56:16.454917   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:56:16.478085   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.478126   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:56:16.478136   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:56:16.478148   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:56:16.478155   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:56:16.478166   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.478175   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:56:16.478185   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:56:16.478194   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:56:16.478203   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.478825   17440 logs.go:123] Gathering logs for kube-controller-manager [f48fc04e3475] ...
	I1229 06:56:16.478843   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48fc04e3475"
	I1229 06:56:16.501568   17440 command_runner.go:130] ! I1229 06:56:01.090404       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.501592   17440 command_runner.go:130] ! I1229 06:56:01.103535       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:56:16.501601   17440 command_runner.go:130] ! I1229 06:56:01.103787       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.501610   17440 command_runner.go:130] ! I1229 06:56:01.105458       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:56:16.501623   17440 command_runner.go:130] ! I1229 06:56:01.105665       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.501630   17440 command_runner.go:130] ! I1229 06:56:01.105907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:56:16.501636   17440 command_runner.go:130] ! I1229 06:56:01.105924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.501982   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:56:16.501996   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:56:16.524487   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.524514   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:56:16.524523   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.524767   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:56:16.524788   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.524805   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:56:16.524812   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.526406   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:56:16.526437   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:57:16.604286   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:57:16.606268   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.079810784s)
	W1229 06:57:16.606306   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:57:16.606317   17440 logs.go:123] Gathering logs for kube-apiserver [18d0015c724a] ...
	I1229 06:57:16.606331   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d0015c724a"
	I1229 06:57:16.636305   17440 command_runner.go:130] ! Error response from daemon: No such container: 18d0015c724a
	W1229 06:57:16.636367   17440 logs.go:130] failed kube-apiserver [18d0015c724a]: command: /bin/bash -c "docker logs --tail 400 18d0015c724a" /bin/bash -c "docker logs --tail 400 18d0015c724a": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 18d0015c724a
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 18d0015c724a
	
	** /stderr **
	I1229 06:57:16.636376   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:57:16.636391   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:57:16.657452   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:57:19.160135   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:57:24.162053   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:57:24.162161   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:57:24.182182   17440 command_runner.go:130] > b206d555ad19
	I1229 06:57:24.183367   17440 logs.go:282] 1 containers: [b206d555ad19]
	I1229 06:57:24.183464   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:57:24.206759   17440 command_runner.go:130] > 6b7711ee25a2
	I1229 06:57:24.206821   17440 command_runner.go:130] > d81259f64136
	I1229 06:57:24.206853   17440 logs.go:282] 2 containers: [6b7711ee25a2 d81259f64136]
	I1229 06:57:24.206926   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:57:24.228856   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:57:24.228897   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:57:24.228968   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:57:24.247867   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:57:24.247890   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:57:24.249034   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:57:24.249130   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:57:24.268209   17440 command_runner.go:130] > 8911777281f4
	I1229 06:57:24.269160   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:57:24.269243   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:57:24.288837   17440 command_runner.go:130] > f48fc04e3475
	I1229 06:57:24.288871   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:57:24.290245   17440 logs.go:282] 2 containers: [f48fc04e3475 17fe16a2822a]
	I1229 06:57:24.290337   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:57:24.312502   17440 logs.go:282] 0 containers: []
	W1229 06:57:24.312531   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:57:24.312592   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:57:24.334811   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:57:24.334849   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:57:24.334875   17440 logs.go:123] Gathering logs for kube-apiserver [b206d555ad19] ...
	I1229 06:57:24.334888   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b206d555ad19"
	I1229 06:57:24.357541   17440 command_runner.go:130] ! I1229 06:57:22.434262       1 options.go:263] external host was not specified, using 192.168.39.121
	I1229 06:57:24.357567   17440 command_runner.go:130] ! I1229 06:57:22.436951       1 server.go:150] Version: v1.35.0
	I1229 06:57:24.357577   17440 command_runner.go:130] ! I1229 06:57:22.436991       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.357602   17440 command_runner.go:130] ! E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	W1229 06:57:24.359181   17440 logs.go:138] Found kube-apiserver [b206d555ad19] problem: E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:57:24.359206   17440 logs.go:123] Gathering logs for kube-controller-manager [f48fc04e3475] ...
	I1229 06:57:24.359218   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48fc04e3475"
	I1229 06:57:24.381077   17440 command_runner.go:130] ! I1229 06:56:01.090404       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:57:24.381103   17440 command_runner.go:130] ! I1229 06:56:01.103535       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:57:24.381113   17440 command_runner.go:130] ! I1229 06:56:01.103787       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.381121   17440 command_runner.go:130] ! I1229 06:56:01.105458       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:57:24.381131   17440 command_runner.go:130] ! I1229 06:56:01.105665       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.381137   17440 command_runner.go:130] ! I1229 06:56:01.105907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:57:24.381144   17440 command_runner.go:130] ! I1229 06:56:01.105924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:57:24.382680   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:57:24.382711   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:57:24.427354   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:57:24.427382   17440 command_runner.go:130] > b206d555ad194       5c6acd67e9cd1       2 seconds ago        Exited              kube-apiserver            5                   d3819cc8ab802       kube-apiserver-functional-695625            kube-system
	I1229 06:57:24.427400   17440 command_runner.go:130] > f48fc04e34751       2c9a4b058bd7e       About a minute ago   Running             kube-controller-manager   2                   0a96e34d38f8c       kube-controller-manager-functional-695625   kube-system
	I1229 06:57:24.427411   17440 command_runner.go:130] > 6b7711ee25a2d       0a108f7189562       About a minute ago   Running             etcd                      2                   173054afc2f39       etcd-functional-695625                      kube-system
	I1229 06:57:24.427421   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       3 minutes ago        Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:57:24.427441   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       3 minutes ago        Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:57:24.427454   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       3 minutes ago        Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:57:24.427465   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       3 minutes ago        Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:57:24.427477   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       3 minutes ago        Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:57:24.427488   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:57:24.427509   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       4 minutes ago        Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:57:24.430056   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:57:24.430095   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:57:24.453665   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453712   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453738   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:57:24.453770   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453809   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:57:24.453838   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453867   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.453891   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453911   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453928   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453945   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.453961   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453974   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454002   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454022   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454040   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454058   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454074   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454087   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454103   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454120   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454135   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454149   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454165   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454179   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454194   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454208   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454224   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454246   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454262   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454276   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454294   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454310   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454326   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454342   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454358   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454371   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454386   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454401   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454423   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454447   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454472   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454500   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454519   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454533   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454549   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454565   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454579   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454593   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454608   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454625   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454640   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454655   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:57:24.454667   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:57:24.454680   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.454697   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.454714   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.454729   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.454741   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:57:24.454816   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.454842   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454855   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454870   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454881   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:57:24.454896   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:57:24.454912   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:57:24.454940   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:57:24.454957   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.454969   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:57:24.454987   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:57:24.455012   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:57:24.455025   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:57:24.455039   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.455055   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.455081   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.455097   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.455110   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:57:24.455125   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:57:24.455144   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.455165   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:57:24.455186   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:57:24.455204   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:57:24.455224   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.455243   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.455275   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:57:24.455294   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455310   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.455326   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.455345   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455366   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455386   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455404   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:57:24.455423   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455446   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:57:24.455472   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455490   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:57:24.455506   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455528   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:57:24.455550   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.455573   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455588   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455603   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455615   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.455628   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.455640   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455657   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455669   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:57:24.455681   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455699   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:57:24.455720   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455739   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.455750   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:57:24.455810   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:57:24.455823   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:57:24.455835   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455848   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:57:24.455860   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455872   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.455892   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455904   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:57:24.455916   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:57:24.455930   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455967   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.455990   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456008   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456019   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:57:24.456031   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:57:24.456052   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456067   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.456078   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.456100   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.456114   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456124   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:57:24.456144   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456159   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.456169   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.456191   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456205   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.456216   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.456229   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456239   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:57:24.456260   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456304   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:57:24.456318   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.456331   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.456352   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456364   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.456372   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:57:24.456379   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:57:24.456386   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:57:24.456396   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:57:24.456406   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:57:24.456423   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:57:24.456441   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.456458   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:57:24.456472   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:57:24.456487   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:57:24.456503   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:57:24.456520   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.456540   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:57:24.456560   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:57:24.456573   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:57:24.456584   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:57:24.456626   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:57:24.456639   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:57:24.456647   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:57:24.456657   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:57:24.456665   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:57:24.456676   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:57:24.456685   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:57:24.456695   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:57:24.456703   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:57:24.456714   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:57:24.456726   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:57:24.456739   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:57:24.456748   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:57:24.456761   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:57:24.456771   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.456782   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:57:24.456790   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:57:24.456811   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:57:24.456821   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:57:24.456832   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:57:24.456845   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:57:24.456853   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:57:24.456866   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:57:24.456875   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:57:24.456885   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:57:24.456893   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:57:24.456907   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:57:24.456918   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:57:24.456927   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:57:24.456937   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:57:24.456947   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:57:24.456959   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:57:24.456971   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:57:24.456990   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.457011   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.457023   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:57:24.457032   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:57:24.457044   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:57:24.457054   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:57:24.457067   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:57:24.457074   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:57:24.457083   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:57:24.457093   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:57:24.457105   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:57:24.457112   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:57:24.457125   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:57:24.457133   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:57:24.457145   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:57:24.457154   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:57:24.457168   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:57:24.457178   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457192   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457205   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457220   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:57:24.457235   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:57:24.457247   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:57:24.457258   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:57:24.457271   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:57:24.457284   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:57:24.457299   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457310   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.457322   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:57:24.457333   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:57:24.457345   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.457359   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:57:24.457370   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:57:24.457381   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:57:24.457396   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457410   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.457436   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:57:24.457460   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457481   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457500   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:57:24.457515   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457533   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:57:24.457586   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.457604   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.457613   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.457633   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457649   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:57:24.457664   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457680   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:57:24.457697   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.457717   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457740   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457763   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457785   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457817   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.457904   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457927   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457948   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457976   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457996   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.458019   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.458034   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458050   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:57:24.458090   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.458106   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.458116   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.458130   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458141   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458158   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.458170   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458184   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.458198   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458263   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.458295   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.458316   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.458339   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458367   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.458389   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.458409   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458429   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.458447   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:57:24.458468   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.458490   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.458512   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458529   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458542   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458572   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458587   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458602   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.458617   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458632   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458644   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458659   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.458674   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458686   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.458702   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458717   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.458732   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458746   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458762   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458777   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458790   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458824   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458839   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458852   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458865   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458879   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458889   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458911   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458925   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458939   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458952   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458964   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458983   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458998   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459016   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459031   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459048   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.459062   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459076   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.459090   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459104   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459118   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459132   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459145   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.459158   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459174   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.459186   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.459201   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459215   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459225   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459247   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459261   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459274   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459286   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459302   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459314   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459334   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459352   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.459392   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.459418   17440 command_runner.go:130] > Dec 29 06:56:17 functional-695625 kubelet[6517]: E1229 06:56:17.801052    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.459438   17440 command_runner.go:130] > Dec 29 06:56:19 functional-695625 kubelet[6517]: I1229 06:56:19.403026    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.459461   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.297746    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459483   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342467    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459502   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342554    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459515   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.342589    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459537   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342829    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459552   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.385984    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459567   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386062    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459579   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.386078    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459599   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386220    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459613   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.298955    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459634   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.734998    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.459649   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185639    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459662   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185732    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459676   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.185750    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459693   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493651    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459707   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493733    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459720   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.493755    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459741   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493996    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459753   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.510294    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459769   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511464    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459782   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511520    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459806   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.511535    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459829   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511684    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459845   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525404    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459859   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525467    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459875   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: I1229 06:56:34.525482    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459897   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525663    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459911   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.300040    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459924   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342011    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459938   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342082    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459950   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.342099    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459972   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342223    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459987   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567456    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460000   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567665    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460016   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.567686    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460036   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.568152    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460053   17440 command_runner.go:130] > Dec 29 06:56:47 functional-695625 kubelet[6517]: E1229 06:56:47.736964    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.460094   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.098168    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.27202431 +0000 UTC m=+0.287773690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.460108   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.300747    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460124   17440 command_runner.go:130] > Dec 29 06:56:53 functional-695625 kubelet[6517]: E1229 06:56:53.405155    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.460136   17440 command_runner.go:130] > Dec 29 06:56:56 functional-695625 kubelet[6517]: I1229 06:56:56.606176    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.460148   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.301915    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460162   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.330173    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.460182   17440 command_runner.go:130] > Dec 29 06:57:04 functional-695625 kubelet[6517]: E1229 06:57:04.738681    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.460195   17440 command_runner.go:130] > Dec 29 06:57:10 functional-695625 kubelet[6517]: E1229 06:57:10.302083    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460206   17440 command_runner.go:130] > Dec 29 06:57:20 functional-695625 kubelet[6517]: E1229 06:57:20.302612    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460221   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185645    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460236   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185704    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.460254   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.740062    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.460269   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.185952    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460283   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.186017    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460296   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.186034    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460308   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.873051    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460321   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874264    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460334   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874357    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460347   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.874375    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:57:24.460367   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874499    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460381   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460395   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892083    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460414   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: I1229 06:57:23.892098    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:57:24.460450   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892218    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460499   17440 command_runner.go:130] > Dec 29 06:57:24 functional-695625 kubelet[6517]: E1229 06:57:24.100978    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.27223373 +0000 UTC m=+0.287983111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.513870   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:57:24.513913   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:57:24.542868   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:57:24.542904   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:57:24.542974   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:57:24.542992   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:57:24.543020   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:57:24.543037   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:57:24.543067   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:57:24.543085   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:57:24.543199   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:57:24.543237   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:57:24.543258   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:57:24.543276   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:57:24.543291   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:57:24.543306   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:57:24.543327   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:57:24.543344   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:57:24.543365   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:57:24.543380   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:57:24.543393   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:57:24.543419   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:57:24.543437   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:57:24.543464   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:57:24.543483   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:57:24.543499   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:57:24.543511   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:57:24.543561   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:57:24.543585   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:57:24.543605   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:57:24.543623   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:57:24.543659   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:57:24.543680   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:57:24.543701   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:57:24.543722   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:57:24.543744   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:57:24.543770   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:57:24.543821   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:57:24.543840   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:57:24.543865   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:57:24.543886   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:57:24.543908   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:57:24.543927   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:57:24.543945   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.543962   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:57:24.543980   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:57:24.544010   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:57:24.544031   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:57:24.544065   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:57:24.544084   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:57:24.544103   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:57:24.544120   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:57:24.544136   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:57:24.544157   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:57:24.544176   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:57:24.544193   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:57:24.544213   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:57:24.544224   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:57:24.544248   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:57:24.544264   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:57:24.544283   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:57:24.544298   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:57:24.544314   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:57:24.544331   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:57:24.544345   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:57:24.544364   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:57:24.544381   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:57:24.544405   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:57:24.544430   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:57:24.544465   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:57:24.544517   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544537   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:57:24.544554   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:57:24.544575   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:57:24.544595   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:57:24.544623   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544641   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:57:24.544662   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:57:24.544683   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:57:24.544711   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544730   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.544767   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544807   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.544828   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:57:24.552509   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:57:24.552540   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:57:24.575005   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:57:24.575036   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:57:24.597505   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.597545   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.597560   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.597577   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.597596   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.597610   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:57:24.597628   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:57:24.597642   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.597654   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.597667   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:57:24.597682   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.597705   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.597733   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.597753   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.597765   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.597773   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:57:24.597803   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.597814   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:57:24.597825   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.597834   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.597841   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:57:24.597848   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.597856   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.597866   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.597874   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.597883   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.597900   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.597909   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.597916   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.597925   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.597936   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.597944   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.597953   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:57:24.597960   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.597973   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.597981   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.597991   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.597999   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.598010   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.598017   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598029   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:57:24.598041   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:57:24.598054   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598067   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598074   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598084   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598095   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:57:24.598104   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.598111   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.598117   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.598126   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.598132   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:57:24.598141   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.598154   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.598174   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.598186   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.598196   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.598205   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:57:24.598224   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.598235   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:57:24.598246   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.598256   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.598264   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.598273   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.598281   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.598289   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.598297   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.598306   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.598314   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.598320   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:57:24.598327   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598334   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.598345   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.598354   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.598365   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.598373   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:57:24.598381   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.598389   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.598400   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.598415   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.598431   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.598447   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.598463   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598476   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598492   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598503   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:57:24.598513   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.598522   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.598531   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.598538   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.598545   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:57:24.598555   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.598578   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.598591   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.598602   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.598613   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.598621   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:57:24.598642   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.598653   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:57:24.598664   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.598674   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.598683   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.598693   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.598701   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.598716   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.598724   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.598732   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.598760   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598774   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598787   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598815   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598832   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:57:24.598845   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598860   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598873   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598889   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598904   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.598918   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.598933   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598946   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598958   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:57:24.598973   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:57:24.598980   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598989   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598999   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:57:24.599008   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.599015   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.599022   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.599030   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:57:24.599036   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.599043   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:57:24.599054   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.599065   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.599077   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.599088   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.599099   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.599107   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:57:24.599120   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599138   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599151   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599168   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599185   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599198   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599213   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599228   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599241   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599257   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599270   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599285   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599297   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599319   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.599331   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:57:24.599346   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:57:24.599359   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:57:24.599376   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:57:24.599387   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:57:24.599405   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:57:24.599423   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:57:24.599452   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:57:24.599472   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:57:24.599489   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:57:24.599503   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:57:24.599517   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:57:24.599529   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:57:24.599544   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:57:24.599559   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:57:24.599572   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:57:24.599587   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:57:24.599602   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:57:24.599615   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:57:24.599631   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:57:24.599644   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:57:24.599654   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:57:24.599664   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.599673   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.599682   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.599692   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.599700   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.599710   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.599747   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.599756   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.599772   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:57:24.599782   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.599789   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:57:24.599806   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.599814   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:57:24.599822   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.599830   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.599841   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.599849   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.599860   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:57:24.599868   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.599879   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.599886   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.599896   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.599907   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.599914   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.599922   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.599934   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599953   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599970   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599983   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600000   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600017   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600034   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:57:24.600049   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600063   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600079   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600092   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600107   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600121   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600137   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600152   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600164   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600177   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600190   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600207   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:57:24.600223   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600235   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:57:24.600247   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:57:24.600261   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600276   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600288   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:57:24.600304   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600317   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600331   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600345   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600357   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600373   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600386   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 dockerd[4014]: time="2025-12-29T06:56:32.448119389Z" level=info msg="ignoring event" container=0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600403   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.600423   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:57:24.600448   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600472   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600490   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 dockerd[4014]: time="2025-12-29T06:57:22.465508622Z" level=info msg="ignoring event" container=b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.619075   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:57:24.619123   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:58:24.700496   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:58:24.700542   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.081407425s)
	W1229 06:58:24.700578   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:58:24.700591   17440 logs.go:123] Gathering logs for etcd [6b7711ee25a2] ...
	I1229 06:58:24.700607   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b7711ee25a2"
	I1229 06:58:24.726206   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.924768Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:58:24.726238   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925193Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:58:24.726283   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925252Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:58:24.726296   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925487Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:58:24.726311   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925602Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:58:24.726321   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925710Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:58:24.726342   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925810Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:58:24.726358   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.934471Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:58:24.726438   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.935217Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:58:24.726461   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.937503Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000068080}"}
	I1229 06:58:24.726472   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940423Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:58:24.726483   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940850Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.479356ms"}
	I1229 06:58:24.726492   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.941120Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":499}
	I1229 06:58:24.726503   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945006Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:58:24.726517   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945707Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:58:24.726528   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945966Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:58:24.726540   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.951906Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":499}
	I1229 06:58:24.726552   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952063Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:58:24.726560   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952160Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:58:24.726577   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952338Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:58:24.726590   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952385Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:58:24.726607   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952396Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:58:24.726618   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952406Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:58:24.726629   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952416Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:58:24.726636   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952460Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:58:24.726647   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:58:24.726657   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 3"}
	I1229 06:58:24.726670   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 3, commit: 499, applied: 0, lastindex: 499, lastterm: 3]"}
	I1229 06:58:24.726680   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.955095Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:58:24.726698   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.961356Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:58:24.726711   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.967658Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:58:24.726723   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.968487Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:58:24.726735   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969020Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:58:24.726750   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969260Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:58:24.726765   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969708Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:58:24.726784   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970043Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:58:24.726826   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970828Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:58:24.726839   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971046Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:58:24.726848   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970057Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:58:24.726858   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971258Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:58:24.726870   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970152Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:58:24.726883   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971336Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:58:24.726896   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971370Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:58:24.726906   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970393Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:58:24.726922   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972410Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:58:24.726935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972698Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:58:24.726947   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 3"}
	I1229 06:58:24.726956   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 3"}
	I1229 06:58:24.726969   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:58:24.726982   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:58:24.726997   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 4"}
	I1229 06:58:24.727009   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 4"}
	I1229 06:58:24.727020   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:58:24.727029   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 4"}
	I1229 06:58:24.727039   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.356018Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 4"}
	I1229 06:58:24.727056   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358237Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:58:24.727064   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358323Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:58:24.727072   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358268Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:58:24.727081   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:58:24.727089   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:58:24.727100   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360417Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:58:24.727109   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:58:24.727120   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:58:24.727132   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363760Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:58:24.733042   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:58:24.733064   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:58:24.755028   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.755231   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:58:24.755256   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:58:24.776073   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:58:24.776109   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:58:24.776120   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:58:24.776135   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:58:24.776154   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:58:24.776162   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:58:24.776180   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:58:24.776188   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:58:24.776195   17440 command_runner.go:130] !  >
	I1229 06:58:24.776212   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:58:24.776224   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:58:24.776249   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:58:24.776257   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:58:24.776266   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.776282   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:58:24.776296   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:58:24.776307   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:58:24.776328   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:58:24.776350   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:58:24.776366   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:58:24.776376   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:58:24.776388   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:58:24.776404   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:58:24.776420   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:58:24.776439   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:58:24.776453   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:58:24.778558   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:58:24.778595   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:58:24.793983   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:58:24.794025   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:58:24.794040   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:58:24.794054   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:58:24.794069   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:58:24.794079   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:58:24.794096   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:58:24.794106   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:58:24.794117   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:58:24.794125   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:58:24.794136   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:58:24.794146   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:58:24.794160   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:24.794167   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:58:24.794178   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:58:24.794186   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:24.794196   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:24.794207   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:24.794215   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:58:24.794221   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:58:24.794229   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:58:24.794241   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:58:24.794252   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:58:24.794260   17440 command_runner.go:130] > [ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:24.794271   17440 command_runner.go:130] > [Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:24.795355   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:58:24.795387   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:58:24.820602   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.820635   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:58:24.820646   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:58:24.820657   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:58:24.820665   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:58:24.820672   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.820681   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:58:24.820692   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:58:24.820698   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:58:24.820705   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:58:24.822450   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:58:24.822473   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:58:24.844122   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.844156   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:58:24.844170   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.844184   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:58:24.844201   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:24.844210   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:58:24.844218   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:58:24.845429   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:58:24.845453   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:58:24.867566   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:58:24.867597   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:58:24.867607   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:58:24.867615   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867622   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867633   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:58:24.867653   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:58:24.867681   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:58:24.867694   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867704   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867719   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:58:24.867734   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867750   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867763   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867817   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867836   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867848   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867859   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867871   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867883   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867891   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867901   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867914   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867926   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867944   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867956   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867972   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867982   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867997   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868013   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868028   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868048   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868063   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868071   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868081   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868098   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868111   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868127   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868140   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868153   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868164   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868177   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868192   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868207   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868221   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868236   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868247   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868258   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868275   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868290   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868304   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868320   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868332   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868342   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868358   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868373   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868385   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868400   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868414   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868425   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868438   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.872821   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:58:24.872842   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 06:58:24.872901   17440 out.go:285] X Problems detected in kube-apiserver [b206d555ad19]:
	X Problems detected in kube-apiserver [b206d555ad19]:
	W1229 06:58:24.872915   17440 out.go:285]   E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	  E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:58:24.872919   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:58:24.872923   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:58:34.875381   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:58:39.877679   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:58:39.877779   17440 kubeadm.go:602] duration metric: took 4m48.388076341s to restartPrimaryControlPlane
	W1229 06:58:39.877879   17440 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1229 06:58:39.877946   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 06:58:39.930050   17440 command_runner.go:130] ! W1229 06:58:39.921577    8187 resetconfiguration.go:53] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1229 06:58:49.935089   17440 command_runner.go:130] ! W1229 06:58:49.926653    8187 reset.go:141] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
	I1229 06:58:49.935131   17440 command_runner.go:130] ! W1229 06:58:49.926754    8187 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
	I1229 06:58:50.998307   17440 command_runner.go:130] > [reset] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I1229 06:58:50.998341   17440 command_runner.go:130] > [reset] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
	I1229 06:58:50.998348   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:50.998357   17440 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/minikube/etcd
	I1229 06:58:50.998366   17440 command_runner.go:130] > [reset] Stopping the kubelet service
	I1229 06:58:50.998372   17440 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I1229 06:58:50.998386   17440 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I1229 06:58:50.998407   17440 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I1229 06:58:50.998417   17440 command_runner.go:130] > The reset process does not perform cleanup of CNI plugin configuration,
	I1229 06:58:50.998428   17440 command_runner.go:130] > network filtering rules and kubeconfig files.
	I1229 06:58:50.998434   17440 command_runner.go:130] > For information on how to perform this cleanup manually, please see:
	I1229 06:58:50.998442   17440 command_runner.go:130] >     https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
	I1229 06:58:50.998458   17440 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (11.120499642s)
	I1229 06:58:50.998527   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:58:51.015635   17440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:58:51.028198   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:58:51.040741   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1229 06:58:51.040780   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1229 06:58:51.040811   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1229 06:58:51.040826   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.040865   17440 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.040877   17440 kubeadm.go:158] found existing configuration files:
	
	I1229 06:58:51.040925   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:58:51.051673   17440 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.052090   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.052155   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:58:51.064755   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:58:51.076455   17440 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.076517   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.076577   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:58:51.088881   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.099253   17440 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.099652   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.099710   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.111487   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:58:51.122532   17440 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.122905   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.122972   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:58:51.135143   17440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 06:58:51.355420   17440 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.355450   17440 command_runner.go:130] ! 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.355543   17440 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 06:58:51.355556   17440 command_runner.go:130] ! [preflight] Some fatal errors occurred:
	I1229 06:58:51.355615   17440 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.355625   17440 command_runner.go:130] ! 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.355790   17440 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.355837   17440 command_runner.go:130] ! [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.356251   17440 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.356265   17440 command_runner.go:130] ! error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.356317   17440 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.356324   17440 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.357454   17440 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.357471   17440 command_runner.go:130] > [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.357544   17440 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:58:51.357561   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	W1229 06:58:51.357680   17440 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
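	The retry that follows hits the same preflight failure, because port 8441 is still bound on the node — the same condition the kube-apiserver container b206d555ad19 ran into earlier ("bind: address already in use"). As a minimal diagnostic sketch only, not part of the captured run, and assuming shell access to the VM for the profile used here, one could confirm which process still holds the port before re-running kubeadm init:
	
	  # From the host: open a shell in the functional-695625 VM (profile name taken from this run)
	  minikube ssh -p functional-695625
	
	  # Inside the VM: show the process still listening on 8441
	  sudo ss -ltnp 'sport = :8441'
	
	  # Cross-check the container runtime for a leftover apiserver container
	  sudo docker ps --filter name=kube-apiserver
	  sudo crictl ps -a --name kube-apiserver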
	
	I1229 06:58:51.357753   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 06:58:51.401004   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:51.401036   17440 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I1229 06:58:51.401047   17440 command_runner.go:130] > [reset] Stopping the kubelet service
	I1229 06:58:51.408535   17440 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I1229 06:58:51.413813   17440 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I1229 06:58:51.415092   17440 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I1229 06:58:51.415117   17440 command_runner.go:130] > The reset process does not perform cleanup of CNI plugin configuration,
	I1229 06:58:51.415128   17440 command_runner.go:130] > network filtering rules and kubeconfig files.
	I1229 06:58:51.415137   17440 command_runner.go:130] > For information on how to perform this cleanup manually, please see:
	I1229 06:58:51.415145   17440 command_runner.go:130] >     https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
	I1229 06:58:51.415645   17440 command_runner.go:130] ! W1229 06:58:51.391426    8625 resetconfiguration.go:53] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1229 06:58:51.415670   17440 command_runner.go:130] ! W1229 06:58:51.392518    8625 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
	I1229 06:58:51.415739   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:58:51.432316   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:58:51.444836   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1229 06:58:51.444860   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1229 06:58:51.444867   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1229 06:58:51.444874   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.445417   17440 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.445435   17440 kubeadm.go:158] found existing configuration files:
	
	I1229 06:58:51.445485   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:58:51.457038   17440 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.457099   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.457146   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:58:51.469980   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:58:51.480965   17440 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.481435   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.481498   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:58:51.493408   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.504342   17440 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.504404   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.504468   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.516567   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:58:51.526975   17440 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.527475   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.527532   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:58:51.539365   17440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 06:58:51.587038   17440 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.587068   17440 command_runner.go:130] > [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.587108   17440 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:58:51.587113   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:51.738880   17440 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.738912   17440 command_runner.go:130] ! 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.738963   17440 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 06:58:51.738975   17440 command_runner.go:130] ! [preflight] Some fatal errors occurred:
	I1229 06:58:51.739029   17440 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.739038   17440 command_runner.go:130] ! 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.739157   17440 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.739166   17440 command_runner.go:130] ! [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.739271   17440 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.739294   17440 command_runner.go:130] ! error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.739348   17440 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.739355   17440 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.739406   17440 kubeadm.go:403] duration metric: took 5m0.289116828s to StartCluster
	I1229 06:58:51.739455   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 06:58:51.739507   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 06:58:51.776396   17440 cri.go:96] found id: ""
	I1229 06:58:51.776420   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.776428   17440 logs.go:284] No container was found matching "kube-apiserver"
	I1229 06:58:51.776434   17440 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 06:58:51.776522   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 06:58:51.808533   17440 cri.go:96] found id: ""
	I1229 06:58:51.808556   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.808563   17440 logs.go:284] No container was found matching "etcd"
	I1229 06:58:51.808570   17440 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 06:58:51.808625   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 06:58:51.841860   17440 cri.go:96] found id: ""
	I1229 06:58:51.841887   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.841894   17440 logs.go:284] No container was found matching "coredns"
	I1229 06:58:51.841900   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 06:58:51.841955   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 06:58:51.875485   17440 cri.go:96] found id: ""
	I1229 06:58:51.875512   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.875520   17440 logs.go:284] No container was found matching "kube-scheduler"
	I1229 06:58:51.875526   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 06:58:51.875576   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 06:58:51.909661   17440 cri.go:96] found id: ""
	I1229 06:58:51.909699   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.909712   17440 logs.go:284] No container was found matching "kube-proxy"
	I1229 06:58:51.909720   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 06:58:51.909790   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 06:58:51.943557   17440 cri.go:96] found id: ""
	I1229 06:58:51.943594   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.943607   17440 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 06:58:51.943616   17440 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 06:58:51.943685   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 06:58:51.979189   17440 cri.go:96] found id: ""
	I1229 06:58:51.979219   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.979228   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:58:51.979234   17440 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 06:58:51.979285   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 06:58:52.013436   17440 cri.go:96] found id: ""
	I1229 06:58:52.013472   17440 logs.go:282] 0 containers: []
	W1229 06:58:52.013482   17440 logs.go:284] No container was found matching "storage-provisioner"
	I1229 06:58:52.013494   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:58:52.013507   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:58:52.030384   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.030429   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:58:52.030454   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.030481   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030506   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030530   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030550   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:58:52.030574   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:58:52.030601   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:58:52.030643   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:58:52.030670   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.030694   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:58:52.030721   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:58:52.030757   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:58:52.030787   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:58:52.030826   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.030853   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.030893   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.030921   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:58:52.030943   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:58:52.030981   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:58:52.031015   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.031053   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:58:52.031087   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:58:52.031117   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:58:52.031146   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.031189   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.031223   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:58:52.031253   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.031281   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.031311   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.031347   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031383   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031422   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031445   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:58:52.031467   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031491   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:58:52.031516   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031538   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:58:52.031562   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031584   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:58:52.031606   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.031628   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031651   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031673   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031695   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.031717   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.031738   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031763   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031786   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:58:52.031824   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031855   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:58:52.031894   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.031949   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.031981   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:58:52.032005   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:58:52.032025   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:58:52.032048   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032069   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:58:52.032093   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.032112   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.032150   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.032170   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:58:52.032192   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:58:52.032214   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032234   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032269   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032290   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032314   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:58:52.032335   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:58:52.032371   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032395   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.032414   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.032452   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.032473   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032495   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:58:52.032530   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032552   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032573   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032608   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032631   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032655   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032676   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032696   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:58:52.032735   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032819   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:58:52.032845   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032864   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032899   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032919   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:52.032935   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:58:52.032948   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:58:52.032960   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.032981   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:58:52.032995   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.033012   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:58:52.033029   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:52.033042   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:58:52.033062   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:58:52.033080   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:58:52.033101   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:58:52.033120   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:52.033138   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:58:52.033166   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:58:52.033187   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:58:52.033206   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:58:52.033274   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:58:52.033294   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:58:52.033309   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:58:52.033326   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:58:52.033343   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:58:52.033359   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:58:52.033378   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:58:52.033398   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:58:52.033413   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:58:52.033431   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:58:52.033453   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:58:52.033476   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:58:52.033492   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:58:52.033507   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:58:52.033526   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033542   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:58:52.033559   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:58:52.033609   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:58:52.033625   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:58:52.033642   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:58:52.033665   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:58:52.033681   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:58:52.033700   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:58:52.033718   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:58:52.033734   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:58:52.033751   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:58:52.033776   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:58:52.033808   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:58:52.033826   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:58:52.033840   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:58:52.033855   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:58:52.033878   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:58:52.033905   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:58:52.033937   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033974   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033993   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:58:52.034010   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:58:52.034030   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:58:52.034050   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:58:52.034084   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:58:52.034099   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:58:52.034116   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:58:52.034134   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:58:52.034152   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:58:52.034167   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:58:52.034186   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:58:52.034203   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:58:52.034224   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:58:52.034241   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:58:52.034265   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:58:52.034286   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034308   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034332   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034358   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:58:52.034380   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:58:52.034404   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:58:52.034427   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:58:52.034450   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:58:52.034472   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:58:52.034499   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034521   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.034544   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:58:52.034566   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:58:52.034588   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:58:52.034611   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:58:52.034633   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:58:52.034655   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:58:52.034678   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034697   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.034724   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:58:52.034749   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034771   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034819   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:58:52.034843   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034873   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:58:52.034936   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.034963   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.034993   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035018   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035049   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:58:52.035071   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035099   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:58:52.035126   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.035159   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035194   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035228   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035263   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035299   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.035333   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035368   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035408   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035445   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035477   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.035512   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.035534   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035563   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:58:52.035631   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.035658   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.035677   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035699   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035720   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035749   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.035771   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035814   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.035838   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035902   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.035927   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.035947   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035978   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036010   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.036038   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.036061   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036082   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.036102   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:58:52.036121   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.036141   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.036165   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036190   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036212   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036251   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036275   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036299   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.036323   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036345   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036369   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036393   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.036418   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036441   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.036464   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036488   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.036511   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036536   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036561   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036584   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036606   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036642   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036664   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036687   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036711   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036734   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036754   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036806   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036895   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036922   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036945   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036973   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037009   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037032   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037052   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037076   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037098   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.037122   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037144   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.037168   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037189   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037212   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037235   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037254   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037278   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037303   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.037325   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037348   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037372   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037392   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037424   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037449   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037472   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037497   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037518   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037539   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037574   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037604   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.037669   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.037694   17440 command_runner.go:130] > Dec 29 06:56:17 functional-695625 kubelet[6517]: E1229 06:56:17.801052    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.037713   17440 command_runner.go:130] > Dec 29 06:56:19 functional-695625 kubelet[6517]: I1229 06:56:19.403026    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.037734   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.297746    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.037760   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342467    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037784   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342554    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037816   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.342589    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037851   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342829    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037875   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.385984    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037897   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386062    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037917   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.386078    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037950   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386220    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037981   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.298955    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038011   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.734998    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.038035   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185639    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038059   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185732    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038079   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.185750    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.038102   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493651    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038125   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493733    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038147   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.493755    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038182   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493996    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038203   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.510294    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.038223   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511464    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038243   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511520    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038260   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.511535    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038297   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511684    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038321   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525404    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038344   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525467    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038365   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: I1229 06:56:34.525482    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038401   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525663    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038423   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.300040    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038449   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342011    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038471   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342082    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038491   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.342099    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038526   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342223    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038549   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567456    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038585   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567665    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038608   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.567686    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038643   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.568152    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038670   17440 command_runner.go:130] > Dec 29 06:56:47 functional-695625 kubelet[6517]: E1229 06:56:47.736964    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.038735   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.098168    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.27202431 +0000 UTC m=+0.287773690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.038758   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.300747    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038785   17440 command_runner.go:130] > Dec 29 06:56:53 functional-695625 kubelet[6517]: E1229 06:56:53.405155    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.038817   17440 command_runner.go:130] > Dec 29 06:56:56 functional-695625 kubelet[6517]: I1229 06:56:56.606176    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.038842   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.301915    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038869   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.330173    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.038900   17440 command_runner.go:130] > Dec 29 06:57:04 functional-695625 kubelet[6517]: E1229 06:57:04.738681    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.038922   17440 command_runner.go:130] > Dec 29 06:57:10 functional-695625 kubelet[6517]: E1229 06:57:10.302083    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038946   17440 command_runner.go:130] > Dec 29 06:57:20 functional-695625 kubelet[6517]: E1229 06:57:20.302612    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038977   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185645    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039003   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185704    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.039034   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.740062    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.039059   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.185952    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039082   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.186017    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039102   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.186034    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.039126   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.873051    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.039149   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874264    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039171   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874357    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039191   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.874375    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039227   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874499    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039252   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039275   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892083    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039295   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: I1229 06:57:23.892098    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039330   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892218    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039396   17440 command_runner.go:130] > Dec 29 06:57:24 functional-695625 kubelet[6517]: E1229 06:57:24.100978    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.27223373 +0000 UTC m=+0.287983111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.039419   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.302837    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039444   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.341968    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039468   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.342033    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039488   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: I1229 06:57:30.342050    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039523   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.342233    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039550   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.608375    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.039576   17440 command_runner.go:130] > Dec 29 06:57:32 functional-695625 kubelet[6517]: E1229 06:57:32.186377    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039598   17440 command_runner.go:130] > Dec 29 06:57:32 functional-695625 kubelet[6517]: E1229 06:57:32.186459    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.039675   17440 command_runner.go:130] > Dec 29 06:57:33 functional-695625 kubelet[6517]: E1229 06:57:33.188187    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039700   17440 command_runner.go:130] > Dec 29 06:57:33 functional-695625 kubelet[6517]: E1229 06:57:33.188267    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.039715   17440 command_runner.go:130] > Dec 29 06:57:37 functional-695625 kubelet[6517]: I1229 06:57:37.010219    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.039749   17440 command_runner.go:130] > Dec 29 06:57:38 functional-695625 kubelet[6517]: E1229 06:57:38.741770    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.039773   17440 command_runner.go:130] > Dec 29 06:57:40 functional-695625 kubelet[6517]: E1229 06:57:40.303258    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039808   17440 command_runner.go:130] > Dec 29 06:57:50 functional-695625 kubelet[6517]: E1229 06:57:50.304120    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039837   17440 command_runner.go:130] > Dec 29 06:57:55 functional-695625 kubelet[6517]: E1229 06:57:55.743031    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.039903   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 kubelet[6517]: E1229 06:57:58.103052    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.272240811 +0000 UTC m=+0.287990191,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.039929   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.304627    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039954   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.432518    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.039991   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.432667    6517 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)
	I1229 06:58:52.040014   17440 command_runner.go:130] > Dec 29 06:58:10 functional-695625 kubelet[6517]: E1229 06:58:10.305485    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040037   17440 command_runner.go:130] > Dec 29 06:58:11 functional-695625 kubelet[6517]: E1229 06:58:11.012407    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.040068   17440 command_runner.go:130] > Dec 29 06:58:12 functional-695625 kubelet[6517]: E1229 06:58:12.743824    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040086   17440 command_runner.go:130] > Dec 29 06:58:18 functional-695625 kubelet[6517]: I1229 06:58:18.014210    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.040107   17440 command_runner.go:130] > Dec 29 06:58:20 functional-695625 kubelet[6517]: E1229 06:58:20.306630    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040127   17440 command_runner.go:130] > Dec 29 06:58:24 functional-695625 kubelet[6517]: E1229 06:58:24.186554    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040149   17440 command_runner.go:130] > Dec 29 06:58:24 functional-695625 kubelet[6517]: E1229 06:58:24.186719    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.040176   17440 command_runner.go:130] > Dec 29 06:58:29 functional-695625 kubelet[6517]: E1229 06:58:29.745697    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040195   17440 command_runner.go:130] > Dec 29 06:58:30 functional-695625 kubelet[6517]: E1229 06:58:30.307319    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040256   17440 command_runner.go:130] > Dec 29 06:58:32 functional-695625 kubelet[6517]: E1229 06:58:32.105206    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.286010652 +0000 UTC m=+0.301760032,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.040279   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.184790    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040300   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.184918    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040319   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: I1229 06:58:39.184949    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040354   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.185100    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040377   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184709    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040397   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184771    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.040413   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.308010    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040433   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.185947    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040455   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.186016    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040477   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.186033    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040498   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503148    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040520   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503225    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040538   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.503241    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040576   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040596   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040619   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040640   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040658   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040692   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040711   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040729   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040741   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040764   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040784   17440 command_runner.go:130] > Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040807   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:58:52.040815   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:58:52.040821   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.040830   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	I1229 06:58:52.093067   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:58:52.093106   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:58:52.108863   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:58:52.108898   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:58:52.108912   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:58:52.108925   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:58:52.108937   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:58:52.108945   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:58:52.108951   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:58:52.108957   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:58:52.108962   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:58:52.108971   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:58:52.108975   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:58:52.108980   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:58:52.108992   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:52.108997   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:58:52.109006   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:58:52.109011   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:52.109021   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:52.109031   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:52.109036   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:58:52.109043   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:58:52.109048   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:58:52.109055   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:58:52.109062   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:58:52.109067   17440 command_runner.go:130] > [ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109072   17440 command_runner.go:130] > [Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109080   17440 command_runner.go:130] > [Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109088   17440 command_runner.go:130] > [  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109931   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:58:52.109946   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:59:52.193646   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:59:52.193695   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.083736259s)
	W1229 06:59:52.193730   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:59:52.193743   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:59:52.193757   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:59:52.211424   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.211464   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.211503   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.211519   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.211538   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:59:52.211555   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:59:52.211569   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:59:52.211587   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.211601   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.211612   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:59:52.211630   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.211652   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.211672   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.211696   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.211714   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.211730   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:59:52.211773   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.211790   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:59:52.211824   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.211841   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.211855   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:59:52.211871   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.211884   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.211899   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.211913   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.211926   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.211948   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.211959   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.211970   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.211984   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.212011   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.212025   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.212039   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:59:52.212064   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.212079   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.212093   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.212108   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.212125   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.212139   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.212152   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212172   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:59:52.212192   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:59:52.212215   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.212237   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.212252   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.212266   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.212285   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:59:52.212301   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.212316   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.212331   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.212341   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.212357   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:59:52.212372   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.212392   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.212423   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.212444   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.212461   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.212477   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:59:52.212512   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.212529   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:59:52.212547   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.212562   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.212577   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.212594   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.212612   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.212628   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.212643   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.212656   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.212671   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.212684   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:59:52.212699   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212714   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.212732   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.212751   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.212767   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.212783   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:59:52.212808   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.212827   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.212844   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.212864   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.212881   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.212899   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.212916   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212932   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.212949   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.212974   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:59:52.212995   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.213006   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.213020   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.213033   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.213055   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:59:52.213073   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.213094   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.213115   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.213135   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.213153   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.213169   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:59:52.213204   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.213221   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:59:52.213242   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.213258   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.213275   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.213291   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.213308   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.213321   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.213334   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.213348   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.213387   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213414   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213440   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213465   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213486   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:59:52.213507   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213528   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213549   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213573   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213595   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213616   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213637   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.213655   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.213675   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:59:52.213697   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:59:52.213709   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.213724   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.213735   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:59:52.213749   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.213759   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.213774   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.213786   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:59:52.213809   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.213822   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:59:52.213839   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.213856   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.213874   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.213891   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.213907   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.213920   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:59:52.213942   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213963   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213985   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214006   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214028   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214055   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214078   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214099   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214122   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214144   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214166   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214190   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214211   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214242   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.214258   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:59:52.214283   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:59:52.214298   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:59:52.214323   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:59:52.214341   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:59:52.214365   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:59:52.214380   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:59:52.214405   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:59:52.214421   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:59:52.214447   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:59:52.214464   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:59:52.214489   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:59:52.214506   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:59:52.214531   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:59:52.214553   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:59:52.214576   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:59:52.214600   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:59:52.214623   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:59:52.214646   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:59:52.214668   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:59:52.214690   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:59:52.214703   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:59:52.214721   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.214735   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.214748   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.214762   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.214775   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.214788   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.215123   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.215148   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.215180   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:59:52.215194   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.215210   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:59:52.215222   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.215233   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:59:52.215247   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.215265   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.215283   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.215299   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.215312   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:59:52.215324   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.215340   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.215355   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.215372   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.215389   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.215401   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.215409   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.215430   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215454   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215478   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215500   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215517   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215532   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215549   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:59:52.215565   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215578   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215593   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215606   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215622   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215643   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215667   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215688   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215712   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215738   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215762   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215839   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:59:52.215868   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215888   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:59:52.215912   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:59:52.215937   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215959   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215979   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:59:52.216007   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216027   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216051   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216067   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216084   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216097   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216112   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 dockerd[4014]: time="2025-12-29T06:56:32.448119389Z" level=info msg="ignoring event" container=0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216128   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216141   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216157   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216171   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216195   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 dockerd[4014]: time="2025-12-29T06:57:22.465508622Z" level=info msg="ignoring event" container=b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216222   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216243   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216263   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216276   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216289   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 dockerd[4014]: time="2025-12-29T06:58:43.458641345Z" level=info msg="ignoring event" container=07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216304   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.011072219Z" level=info msg="ignoring event" container=173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216318   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.102126666Z" level=info msg="ignoring event" container=6b7711ee25a2df71f8c7d296f7186875ebd6ab978a71d33f177de0cc3055645b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216331   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.266578298Z" level=info msg="ignoring event" container=a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216346   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.365376654Z" level=info msg="ignoring event" container=fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216365   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.452640794Z" level=info msg="ignoring event" container=4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216380   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.557330204Z" level=info msg="ignoring event" container=d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216392   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.666151542Z" level=info msg="ignoring event" container=0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216409   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.751481082Z" level=info msg="ignoring event" container=f48fc04e347519b276e239ee9a6b0b8e093862313e46174a1815efae670eec9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216427   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535': Error response from daemon: No such container: 4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535"
	I1229 06:59:52.216440   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535'"
	I1229 06:59:52.216455   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216467   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216484   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be': Error response from daemon: No such container: bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be"
	I1229 06:59:52.216495   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be'"
	I1229 06:59:52.216512   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e': Error response from daemon: No such container: a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e"
	I1229 06:59:52.216525   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e'"
	I1229 06:59:52.216542   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974': Error response from daemon: No such container: d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:59:52.216554   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974'"
	I1229 06:59:52.216568   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00': Error response from daemon: No such container: 6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:59:52.216582   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	I1229 06:59:52.216596   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216611   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216628   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	I1229 06:59:52.216642   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	I1229 06:59:52.216660   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:59:52.216673   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	I1229 06:59:52.238629   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:59:52.238668   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:59:52.287732   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	W1229 06:59:52.290016   17440 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	W1229 06:59:52.290080   17440 out.go:285] * 
	* 
	W1229 06:59:52.290145   17440 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 06:59:52.290156   17440 out.go:285] * 
	* 
	W1229 06:59:52.290452   17440 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:59:52.293734   17440 out.go:203] 
	W1229 06:59:52.295449   17440 out.go:285] X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 06:59:52.295482   17440 out.go:285] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	* Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1229 06:59:52.295500   17440 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	* Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1229 06:59:52.296904   17440 out.go:203] 

                                                
                                                
** /stderr **
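The preflight failure above ([ERROR Port-8441]: Port 8441 is in use) is what drives the GUEST_PORT_IN_USE exit: kubeadm's check found the configured --apiserver-port still bound, most likely by the kube-apiserver left over from the first start. A minimal diagnostic sketch, assuming the functional-695625 VM is still reachable and that ss (and optionally lsof) exists in the guest image; these invocations are illustrative and not part of the recorded test run:

	# show which process holds the apiserver port inside the guest
	out/minikube-linux-amd64 ssh -p functional-695625 -- "sudo ss -ltnp | grep ':8441'"
	# alternative, if lsof is available in the guest
	out/minikube-linux-amd64 ssh -p functional-695625 -- "sudo lsof -i :8441"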
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-695625 --alsologtostderr -v=8": exit status 81
functional_test.go:678: soft start took 6m30.726594975s for "functional-695625" cluster.
I1229 06:59:52.921439   13486 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.838291196s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
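The {{.Host}} template above only surfaces the VM state ("Running"); it does not show whether the apiserver, kubelet, or kubeconfig are healthy, which is why the helper treats the non-zero exit as possibly OK. A short sketch of a fuller query, assuming minikube's status --output json flag behaves as in current releases (the field list in the comment is from memory and may differ by version):

	# per-component status (Host, Kubelet, APIServer, Kubeconfig) as JSON
	out/minikube-linux-amd64 status -p functional-695625 --output json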
helpers_test.go:253: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m1.113583499s)
helpers_test.go:261: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ addons-909246 ssh cat /opt/local-path-provisioner/pvc-60e48b23-4f43-4f44-8576-c979927d0800_default_test-pvc/file1 │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:50 UTC │ 29 Dec 25 06:50 UTC │
	│ addons  │ addons-909246 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                   │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:50 UTC │ 29 Dec 25 06:50 UTC │
	│ addons  │ addons-909246 addons disable volumesnapshots --alsologtostderr -v=1                                               │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:50 UTC │ 29 Dec 25 06:50 UTC │
	│ addons  │ addons-909246 addons disable csi-hostpath-driver --alsologtostderr -v=1                                           │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:50 UTC │ 29 Dec 25 06:51 UTC │
	│ stop    │ -p addons-909246                                                                                                  │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ addons  │ enable dashboard -p addons-909246                                                                                 │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ addons  │ disable dashboard -p addons-909246                                                                                │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ addons  │ disable gvisor -p addons-909246                                                                                   │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ delete  │ -p addons-909246                                                                                                  │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ start   │ -p nospam-039815 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-039815 --driver=kvm2                       │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ start   │ nospam-039815 --log_dir /tmp/nospam-039815 start --dry-run                                                        │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │                     │
	│ start   │ nospam-039815 --log_dir /tmp/nospam-039815 start --dry-run                                                        │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │                     │
	│ start   │ nospam-039815 --log_dir /tmp/nospam-039815 start --dry-run                                                        │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │                     │
	│ pause   │ nospam-039815 --log_dir /tmp/nospam-039815 pause                                                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ pause   │ nospam-039815 --log_dir /tmp/nospam-039815 pause                                                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ pause   │ nospam-039815 --log_dir /tmp/nospam-039815 pause                                                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:52 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ delete  │ -p nospam-039815                                                                                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ start   │ -p functional-695625 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:53 UTC │
	│ start   │ -p functional-695625 --alsologtostderr -v=8                                                                       │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:53:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:53:22.250786   17440 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:53:22.251073   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:53:22.251082   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:53:22.251087   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:53:22.251322   17440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 06:53:22.251807   17440 out.go:368] Setting JSON to false
	I1229 06:53:22.252599   17440 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2152,"bootTime":1766989050,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:53:22.252669   17440 start.go:143] virtualization: kvm guest
	I1229 06:53:22.254996   17440 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:53:22.256543   17440 notify.go:221] Checking for updates...
	I1229 06:53:22.256551   17440 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:53:22.258115   17440 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:53:22.259464   17440 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:53:22.260823   17440 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 06:53:22.262461   17440 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 06:53:22.263830   17440 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:53:22.265499   17440 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:53:22.265604   17440 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:53:22.301877   17440 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 06:53:22.303062   17440 start.go:309] selected driver: kvm2
	I1229 06:53:22.303099   17440 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:22.303255   17440 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:53:22.304469   17440 cni.go:84] Creating CNI manager for ""
	I1229 06:53:22.304541   17440 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:53:22.304607   17440 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:22.304716   17440 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 06:53:22.306617   17440 out.go:179] * Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	I1229 06:53:22.307989   17440 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 06:53:22.308028   17440 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 06:53:22.308037   17440 cache.go:65] Caching tarball of preloaded images
	I1229 06:53:22.308172   17440 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 06:53:22.308185   17440 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 06:53:22.308288   17440 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/config.json ...
	I1229 06:53:22.308499   17440 start.go:360] acquireMachinesLock for functional-695625: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 06:53:22.308543   17440 start.go:364] duration metric: took 25.28µs to acquireMachinesLock for "functional-695625"
	I1229 06:53:22.308555   17440 start.go:96] Skipping create...Using existing machine configuration
	I1229 06:53:22.308560   17440 fix.go:54] fixHost starting: 
	I1229 06:53:22.310738   17440 fix.go:112] recreateIfNeeded on functional-695625: state=Running err=<nil>
	W1229 06:53:22.310765   17440 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 06:53:22.313927   17440 out.go:252] * Updating the running kvm2 "functional-695625" VM ...
	I1229 06:53:22.313960   17440 machine.go:94] provisionDockerMachine start ...
	I1229 06:53:22.317184   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.317690   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.317748   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.317941   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.318146   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.318156   17440 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 06:53:22.424049   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 06:53:22.424102   17440 buildroot.go:166] provisioning hostname "functional-695625"
	I1229 06:53:22.427148   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.427685   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.427715   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.427957   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.428261   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.428280   17440 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-695625 && echo "functional-695625" | sudo tee /etc/hostname
	I1229 06:53:22.552563   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 06:53:22.555422   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.555807   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.555834   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.556061   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.556278   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.556302   17440 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-695625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-695625/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-695625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 06:53:22.661438   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 06:53:22.661470   17440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 06:53:22.661505   17440 buildroot.go:174] setting up certificates
	I1229 06:53:22.661529   17440 provision.go:84] configureAuth start
	I1229 06:53:22.664985   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.665439   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.665459   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.667758   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.668124   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.668145   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.668257   17440 provision.go:143] copyHostCerts
	I1229 06:53:22.668280   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 06:53:22.668308   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 06:53:22.668317   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 06:53:22.668383   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 06:53:22.668476   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 06:53:22.668505   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 06:53:22.668512   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 06:53:22.668541   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 06:53:22.668582   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 06:53:22.668598   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 06:53:22.668603   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 06:53:22.668632   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 06:53:22.668676   17440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.functional-695625 san=[127.0.0.1 192.168.39.121 functional-695625 localhost minikube]
	I1229 06:53:22.746489   17440 provision.go:177] copyRemoteCerts
	I1229 06:53:22.746545   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 06:53:22.749128   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.749596   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.749616   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.749757   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:22.836885   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 06:53:22.836959   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 06:53:22.872390   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 06:53:22.872481   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 06:53:22.908829   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 06:53:22.908896   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 06:53:22.941014   17440 provision.go:87] duration metric: took 279.457536ms to configureAuth
	I1229 06:53:22.941053   17440 buildroot.go:189] setting minikube options for container-runtime
	I1229 06:53:22.941277   17440 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:53:22.944375   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.944857   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.944916   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.945128   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.945387   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.945402   17440 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 06:53:23.052106   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 06:53:23.052136   17440 buildroot.go:70] root file system type: tmpfs
	I1229 06:53:23.052304   17440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 06:53:23.055887   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.056416   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.056446   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.056629   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.056893   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.056961   17440 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 06:53:23.183096   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 06:53:23.186465   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.186943   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.187006   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.187227   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.187475   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.187494   17440 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 06:53:23.306011   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 06:53:23.306077   17440 machine.go:97] duration metric: took 992.109676ms to provisionDockerMachine
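The docker.service update above is deliberately idempotent: the unit is rendered to docker.service.new, diffed against the installed file, and only swapped in (followed by daemon-reload/enable/restart) when the content differs. A rough Go sketch of the same write-compare-swap pattern against local files, purely for illustration (updateIfChanged is a made-up helper, not minikube's API):

	package provision

	import (
		"bytes"
		"os"
	)

	// updateIfChanged mirrors the shell pattern above: write the new unit next to the
	// old one, and only swap it in (returning true so the caller can daemon-reload and
	// restart) when the rendered content actually differs.
	func updateIfChanged(path string, rendered []byte) (bool, error) {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return false, nil // nothing changed; leave the running service alone
		}
		tmp := path + ".new"
		if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(tmp, path)
	}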
	I1229 06:53:23.306099   17440 start.go:293] postStartSetup for "functional-695625" (driver="kvm2")
	I1229 06:53:23.306114   17440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 06:53:23.306201   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 06:53:23.309537   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.309944   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.309967   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.310122   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.393657   17440 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 06:53:23.398689   17440 command_runner.go:130] > NAME=Buildroot
	I1229 06:53:23.398723   17440 command_runner.go:130] > VERSION=2025.02
	I1229 06:53:23.398731   17440 command_runner.go:130] > ID=buildroot
	I1229 06:53:23.398737   17440 command_runner.go:130] > VERSION_ID=2025.02
	I1229 06:53:23.398745   17440 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1229 06:53:23.398791   17440 info.go:137] Remote host: Buildroot 2025.02
	I1229 06:53:23.398821   17440 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 06:53:23.398897   17440 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 06:53:23.398981   17440 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 06:53:23.398993   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /etc/ssl/certs/134862.pem
	I1229 06:53:23.399068   17440 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> hosts in /etc/test/nested/copy/13486
	I1229 06:53:23.399075   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> /etc/test/nested/copy/13486/hosts
	I1229 06:53:23.399114   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13486
	I1229 06:53:23.412045   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 06:53:23.445238   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts --> /etc/test/nested/copy/13486/hosts (40 bytes)
	I1229 06:53:23.479048   17440 start.go:296] duration metric: took 172.930561ms for postStartSetup
	I1229 06:53:23.479099   17440 fix.go:56] duration metric: took 1.170538464s for fixHost
	I1229 06:53:23.482307   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.482761   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.482808   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.483049   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.483313   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.483327   17440 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 06:53:23.586553   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766991203.580410695
	
	I1229 06:53:23.586572   17440 fix.go:216] guest clock: 1766991203.580410695
	I1229 06:53:23.586579   17440 fix.go:229] Guest: 2025-12-29 06:53:23.580410695 +0000 UTC Remote: 2025-12-29 06:53:23.479103806 +0000 UTC m=+1.278853461 (delta=101.306889ms)
	I1229 06:53:23.586594   17440 fix.go:200] guest clock delta is within tolerance: 101.306889ms
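The guest-clock check above runs `date +%s.%N` inside the VM and compares the result with the host clock; the ~101ms delta is accepted because it is within the skew tolerance. A small Go sketch of parsing that output and computing the delta (guestClockDelta is illustrative, not the fix.go implementation):

	package provision

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the output of `date +%s.%N` run on the guest and
	// returns the signed difference between the host reference time and the guest clock.
	func guestClockDelta(out string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		if len(parts) != 2 {
			return 0, fmt.Errorf("unexpected date output %q", out)
		}
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(sec, nsec)
		return host.Sub(guest), nil
	}

For the output above (1766991203.580410695) the computed delta is on the order of 100ms, which is why no clock adjustment is attempted.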
	I1229 06:53:23.586598   17440 start.go:83] releasing machines lock for "functional-695625", held for 1.278049275s
	I1229 06:53:23.590004   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.590438   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.590463   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.591074   17440 ssh_runner.go:195] Run: cat /version.json
	I1229 06:53:23.591186   17440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 06:53:23.594362   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594454   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594831   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.594868   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594954   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.595021   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.595083   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.595278   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.692873   17440 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1229 06:53:23.692948   17440 command_runner.go:130] > {"iso_version": "v1.37.0-1766979747-22353", "kicbase_version": "v0.0.48-1766884053-22351", "minikube_version": "v1.37.0", "commit": "f5189b2bdbb6990e595e25e06a017f8901d29fa8"}
	I1229 06:53:23.693063   17440 ssh_runner.go:195] Run: systemctl --version
	I1229 06:53:23.700357   17440 command_runner.go:130] > systemd 256 (256.7)
	I1229 06:53:23.700393   17440 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1229 06:53:23.700501   17440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1229 06:53:23.707230   17440 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1229 06:53:23.707369   17440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 06:53:23.707433   17440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 06:53:23.719189   17440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 06:53:23.719220   17440 start.go:496] detecting cgroup driver to use...
	I1229 06:53:23.719246   17440 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 06:53:23.719351   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:53:23.744860   17440 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1229 06:53:23.744940   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 06:53:23.758548   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 06:53:23.773051   17440 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 06:53:23.773122   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 06:53:23.786753   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 06:53:23.800393   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 06:53:23.813395   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 06:53:23.826600   17440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 06:53:23.840992   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 06:53:23.854488   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 06:53:23.869084   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
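The series of `sed -i` edits above rewrites /etc/containerd/config.toml in place: pause image, SystemdCgroup = true, runc v2 runtime, conf_dir, and unprivileged ports. The central one, forcing the systemd cgroup driver while preserving the original indentation, looks roughly like this in Go (a sketch; minikube actually performs it with sed over SSH):

	package provision

	import "regexp"

	// Equivalent of `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'`:
	// flip SystemdCgroup to true in a containerd config.toml, keeping the indentation.
	var systemdCgroupRe = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

	func forceSystemdCgroup(configTOML string) string {
		return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = true")
	}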
	I1229 06:53:23.882690   17440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 06:53:23.894430   17440 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1229 06:53:23.894542   17440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 06:53:23.912444   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:24.139583   17440 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 06:53:24.191402   17440 start.go:496] detecting cgroup driver to use...
	I1229 06:53:24.191457   17440 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 06:53:24.191521   17440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 06:53:24.217581   17440 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1229 06:53:24.217604   17440 command_runner.go:130] > [Unit]
	I1229 06:53:24.217609   17440 command_runner.go:130] > Description=Docker Application Container Engine
	I1229 06:53:24.217615   17440 command_runner.go:130] > Documentation=https://docs.docker.com
	I1229 06:53:24.217626   17440 command_runner.go:130] > After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1229 06:53:24.217631   17440 command_runner.go:130] > Wants=network-online.target containerd.service
	I1229 06:53:24.217635   17440 command_runner.go:130] > Requires=docker.socket
	I1229 06:53:24.217638   17440 command_runner.go:130] > StartLimitBurst=3
	I1229 06:53:24.217642   17440 command_runner.go:130] > StartLimitIntervalSec=60
	I1229 06:53:24.217646   17440 command_runner.go:130] > [Service]
	I1229 06:53:24.217649   17440 command_runner.go:130] > Type=notify
	I1229 06:53:24.217653   17440 command_runner.go:130] > Restart=always
	I1229 06:53:24.217660   17440 command_runner.go:130] > ExecStart=
	I1229 06:53:24.217694   17440 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1229 06:53:24.217710   17440 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1229 06:53:24.217748   17440 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1229 06:53:24.217761   17440 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1229 06:53:24.217767   17440 command_runner.go:130] > LimitNOFILE=infinity
	I1229 06:53:24.217782   17440 command_runner.go:130] > LimitNPROC=infinity
	I1229 06:53:24.217790   17440 command_runner.go:130] > LimitCORE=infinity
	I1229 06:53:24.217818   17440 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1229 06:53:24.217828   17440 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1229 06:53:24.217833   17440 command_runner.go:130] > TasksMax=infinity
	I1229 06:53:24.217840   17440 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1229 06:53:24.217847   17440 command_runner.go:130] > Delegate=yes
	I1229 06:53:24.217855   17440 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1229 06:53:24.217864   17440 command_runner.go:130] > KillMode=process
	I1229 06:53:24.217871   17440 command_runner.go:130] > OOMScoreAdjust=-500
	I1229 06:53:24.217881   17440 command_runner.go:130] > [Install]
	I1229 06:53:24.217896   17440 command_runner.go:130] > WantedBy=multi-user.target
	I1229 06:53:24.217973   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:53:24.255457   17440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 06:53:24.293449   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:53:24.313141   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 06:53:24.332090   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:53:24.359168   17440 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
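/etc/crictl.yaml is written twice in this run: first pointing at the containerd socket, then, once Docker/cri-dockerd is selected as the runtime, at unix:///var/run/cri-dockerd.sock as shown above. The file is a single line of YAML, so rendering it needs nothing more than this (writeCrictlConfig is an illustrative helper):

	package provision

	import (
		"fmt"
		"os"
	)

	// writeCrictlConfig renders the one-line /etc/crictl.yaml shown above, pointing
	// crictl at whichever CRI socket the chosen runtime exposes.
	func writeCrictlConfig(path, endpoint string) error {
		content := fmt.Sprintf("runtime-endpoint: %s\n", endpoint)
		return os.WriteFile(path, []byte(content), 0o644)
	}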
	I1229 06:53:24.359453   17440 ssh_runner.go:195] Run: which cri-dockerd
	I1229 06:53:24.364136   17440 command_runner.go:130] > /usr/bin/cri-dockerd
	I1229 06:53:24.364255   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 06:53:24.377342   17440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 06:53:24.400807   17440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 06:53:24.632265   17440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 06:53:24.860401   17440 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 06:53:24.860544   17440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 06:53:24.885002   17440 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 06:53:24.902479   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:25.138419   17440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 06:53:48.075078   17440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (22.936617903s)
	I1229 06:53:48.075181   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 06:53:48.109404   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 06:53:48.160259   17440 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 06:53:48.213352   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 06:53:48.231311   17440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 06:53:48.408709   17440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 06:53:48.584722   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:48.754219   17440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 06:53:48.798068   17440 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 06:53:48.815248   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:48.983637   17440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 06:53:49.117354   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 06:53:49.139900   17440 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 06:53:49.139985   17440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 06:53:49.146868   17440 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1229 06:53:49.146900   17440 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1229 06:53:49.146910   17440 command_runner.go:130] > Device: 0,23	Inode: 2092        Links: 1
	I1229 06:53:49.146918   17440 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1229 06:53:49.146926   17440 command_runner.go:130] > Access: 2025-12-29 06:53:49.121969518 +0000
	I1229 06:53:49.146933   17440 command_runner.go:130] > Modify: 2025-12-29 06:53:48.995956445 +0000
	I1229 06:53:49.146940   17440 command_runner.go:130] > Change: 2025-12-29 06:53:49.012958222 +0000
	I1229 06:53:49.146947   17440 command_runner.go:130] >  Birth: 2025-12-29 06:53:48.995956445 +0000
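"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a simple poll: stat the path until it shows up as a socket or the deadline passes. A compact Go sketch of that wait loop (waitForSocket is illustrative only):

	package provision

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists as a unix socket or the timeout elapses,
	// mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}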
	I1229 06:53:49.146986   17440 start.go:574] Will wait 60s for crictl version
	I1229 06:53:49.147040   17440 ssh_runner.go:195] Run: which crictl
	I1229 06:53:49.152717   17440 command_runner.go:130] > /usr/bin/crictl
	I1229 06:53:49.152823   17440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 06:53:49.184154   17440 command_runner.go:130] > Version:  0.1.0
	I1229 06:53:49.184179   17440 command_runner.go:130] > RuntimeName:  docker
	I1229 06:53:49.184183   17440 command_runner.go:130] > RuntimeVersion:  28.5.2
	I1229 06:53:49.184188   17440 command_runner.go:130] > RuntimeApiVersion:  v1
	I1229 06:53:49.184211   17440 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 06:53:49.184266   17440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 06:53:49.212414   17440 command_runner.go:130] > 28.5.2
	I1229 06:53:49.213969   17440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 06:53:49.257526   17440 command_runner.go:130] > 28.5.2
	I1229 06:53:49.262261   17440 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 06:53:49.266577   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:49.267255   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:49.267298   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:49.267633   17440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 06:53:49.286547   17440 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1229 06:53:49.286686   17440 kubeadm.go:884] updating cluster {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 06:53:49.286896   17440 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 06:53:49.286965   17440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 06:53:49.324994   17440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0
	I1229 06:53:49.325029   17440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 06:53:49.325037   17440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0
	I1229 06:53:49.325045   17440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0
	I1229 06:53:49.325052   17440 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1229 06:53:49.325060   17440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1229 06:53:49.325067   17440 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1229 06:53:49.325074   17440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 06:53:49.325113   17440 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 06:53:49.325127   17440 docker.go:624] Images already preloaded, skipping extraction
	I1229 06:53:49.325191   17440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 06:53:49.352256   17440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0
	I1229 06:53:49.352294   17440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0
	I1229 06:53:49.352301   17440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0
	I1229 06:53:49.352309   17440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 06:53:49.352315   17440 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1229 06:53:49.352323   17440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1229 06:53:49.352349   17440 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1229 06:53:49.352361   17440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 06:53:49.352398   17440 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 06:53:49.352412   17440 cache_images.go:86] Images are preloaded, skipping loading
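The two `docker images --format {{.Repository}}:{{.Tag}}` listings above are compared against the expected preload set; since every image is already present, the preload tarball is not extracted. The comparison amounts to a set difference, sketched here (missingImages is a made-up helper):

	package provision

	import "strings"

	// missingImages compares the `docker images --format {{.Repository}}:{{.Tag}}`
	// output above against a wanted list and returns whatever still needs loading.
	// If the result is empty, extraction of the preload tarball can be skipped.
	func missingImages(dockerImagesOutput string, wanted []string) []string {
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(dockerImagesOutput), "\n") {
			have[strings.TrimSpace(line)] = true
		}
		var missing []string
		for _, img := range wanted {
			if !have[img] {
				missing = append(missing, img)
			}
		}
		return missing
	}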
	I1229 06:53:49.352427   17440 kubeadm.go:935] updating node { 192.168.39.121 8441 v1.35.0 docker true true} ...
	I1229 06:53:49.352542   17440 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-695625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 06:53:49.352611   17440 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 06:53:49.466471   17440 command_runner.go:130] > systemd
	I1229 06:53:49.469039   17440 cni.go:84] Creating CNI manager for ""
	I1229 06:53:49.469084   17440 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:53:49.469108   17440 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 06:53:49.469137   17440 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-695625 NodeName:functional-695625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 06:53:49.469275   17440 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-695625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 06:53:49.469338   17440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 06:53:49.495545   17440 command_runner.go:130] > kubeadm
	I1229 06:53:49.495573   17440 command_runner.go:130] > kubectl
	I1229 06:53:49.495580   17440 command_runner.go:130] > kubelet
	I1229 06:53:49.495602   17440 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 06:53:49.495647   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 06:53:49.521658   17440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1229 06:53:49.572562   17440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 06:53:49.658210   17440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
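The kubeadm.yaml rendered a few lines up (and copied to /var/tmp/minikube/kubeadm.yaml.new, 2223 bytes) is a multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---. A quick way to sanity-check such a rendering without a YAML dependency, shown as an illustrative Go sketch:

	package provision

	import (
		"bufio"
		"strings"
	)

	// kubeadmDocKinds splits a rendered kubeadm.yaml on its "---" separators and
	// returns the `kind:` of each document (for the config above: InitConfiguration,
	// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration).
	func kubeadmDocKinds(rendered string) []string {
		var kinds []string
		for _, doc := range strings.Split(rendered, "\n---\n") {
			sc := bufio.NewScanner(strings.NewReader(doc))
			for sc.Scan() {
				line := strings.TrimSpace(sc.Text())
				if strings.HasPrefix(line, "kind:") {
					kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
					break
				}
			}
		}
		return kinds
	}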
	I1229 06:53:49.740756   17440 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I1229 06:53:49.746333   17440 command_runner.go:130] > 192.168.39.121	control-plane.minikube.internal
	I1229 06:53:49.746402   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:50.073543   17440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 06:53:50.148789   17440 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625 for IP: 192.168.39.121
	I1229 06:53:50.148837   17440 certs.go:195] generating shared ca certs ...
	I1229 06:53:50.148860   17440 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:53:50.149082   17440 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 06:53:50.149152   17440 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 06:53:50.149169   17440 certs.go:257] generating profile certs ...
	I1229 06:53:50.149320   17440 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key
	I1229 06:53:50.149413   17440 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key.a4651613
	I1229 06:53:50.149478   17440 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key
	I1229 06:53:50.149490   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 06:53:50.149508   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 06:53:50.149525   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 06:53:50.149541   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 06:53:50.149556   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 06:53:50.149573   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 06:53:50.149588   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 06:53:50.149607   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 06:53:50.149673   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 06:53:50.149723   17440 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 06:53:50.149738   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 06:53:50.149776   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 06:53:50.149837   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 06:53:50.149873   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 06:53:50.149950   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 06:53:50.150003   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:50.150023   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem -> /usr/share/ca-certificates/13486.pem
	I1229 06:53:50.150038   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /usr/share/ca-certificates/134862.pem
	I1229 06:53:50.150853   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 06:53:50.233999   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 06:53:50.308624   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 06:53:50.436538   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 06:53:50.523708   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 06:53:50.633239   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 06:53:50.746852   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 06:53:50.793885   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 06:53:50.894956   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 06:53:50.955149   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 06:53:51.018694   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 06:53:51.084938   17440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 06:53:51.127238   17440 ssh_runner.go:195] Run: openssl version
	I1229 06:53:51.136812   17440 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1229 06:53:51.136914   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.154297   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 06:53:51.175503   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182560   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182600   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182653   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.195355   17440 command_runner.go:130] > b5213941
	I1229 06:53:51.195435   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 06:53:51.217334   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.233542   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 06:53:51.248778   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255758   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255826   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255874   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.272983   17440 command_runner.go:130] > 51391683
	I1229 06:53:51.273077   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 06:53:51.303911   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.325828   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 06:53:51.347788   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360429   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360567   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360625   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.369235   17440 command_runner.go:130] > 3ec20f2e
	I1229 06:53:51.369334   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
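The three `openssl x509 -hash -noout` / `ln -fs` pairs above install CA certificates using the classic OpenSSL hashed-directory convention: the subject-name hash (b5213941, 51391683, 3ec20f2e here) becomes a <hash>.0 symlink under /etc/ssl/certs. A rough Go sketch that shells out to the same openssl flags seen in the log (linkCAByHash and the paths are illustrative):

	package provision

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCAByHash reproduces the `openssl x509 -hash` + `ln -fs` steps above:
	// compute the subject-name hash of certPath and point <certsDir>/<hash>.0 at it.
	func linkCAByHash(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // -f behaviour: replace an existing symlink
		if err := os.Symlink(certPath, link); err != nil {
			return "", err
		}
		return link, nil
	}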
	I1229 06:53:51.381517   17440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:53:51.387517   17440 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:53:51.387548   17440 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1229 06:53:51.387554   17440 command_runner.go:130] > Device: 253,1	Inode: 1052441     Links: 1
	I1229 06:53:51.387560   17440 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1229 06:53:51.387568   17440 command_runner.go:130] > Access: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387572   17440 command_runner.go:130] > Modify: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387577   17440 command_runner.go:130] > Change: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387581   17440 command_runner.go:130] >  Birth: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387657   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 06:53:51.396600   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.397131   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 06:53:51.410180   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.410283   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 06:53:51.419062   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.419164   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 06:53:51.431147   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.431222   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 06:53:51.441881   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.442104   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 06:53:51.450219   17440 command_runner.go:130] > Certificate will not expire
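Each `openssl x509 -noout -checkend 86400` call above exits non-zero only if the certificate expires within the next 86400 seconds (24h), which is how the restart path decides whether control-plane certs need regeneration. The equivalent check in pure Go with crypto/x509, as a sketch (expiresWithin is not the project's API):

	package provision

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -noout -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}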
	I1229 06:53:51.450295   17440 kubeadm.go:401] StartCluster: {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:51.450396   17440 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 06:53:51.474716   17440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 06:53:51.489086   17440 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1229 06:53:51.489107   17440 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1229 06:53:51.489113   17440 command_runner.go:130] > /var/lib/minikube/etcd:
	I1229 06:53:51.489117   17440 command_runner.go:130] > member
	I1229 06:53:51.489676   17440 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 06:53:51.489694   17440 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 06:53:51.489753   17440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 06:53:51.503388   17440 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:51.503948   17440 kubeconfig.go:125] found "functional-695625" server: "https://192.168.39.121:8441"
	I1229 06:53:51.504341   17440 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:53:51.504505   17440 kapi.go:59] client config for functional-695625: &rest.Config{Host:"https://192.168.39.121:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 06:53:51.504963   17440 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 06:53:51.504986   17440 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 06:53:51.504992   17440 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 06:53:51.504998   17440 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 06:53:51.505004   17440 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 06:53:51.505012   17440 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 06:53:51.505089   17440 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1229 06:53:51.505414   17440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 06:53:51.521999   17440 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.121
	I1229 06:53:51.522047   17440 kubeadm.go:1161] stopping kube-system containers ...
	I1229 06:53:51.522115   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 06:53:51.550376   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:53:51.550407   17440 command_runner.go:130] > a014f32abcd0
	I1229 06:53:51.550415   17440 command_runner.go:130] > d81259f64136
	I1229 06:53:51.550422   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:53:51.550432   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:53:51.550441   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:53:51.550448   17440 command_runner.go:130] > 4ed279733477
	I1229 06:53:51.550455   17440 command_runner.go:130] > 1fc5fa7d9295
	I1229 06:53:51.550462   17440 command_runner.go:130] > 98261fa185f6
	I1229 06:53:51.550470   17440 command_runner.go:130] > b046056ff071
	I1229 06:53:51.550478   17440 command_runner.go:130] > b3cc8048f6d9
	I1229 06:53:51.550485   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:53:51.550491   17440 command_runner.go:130] > 64853b50a6c5
	I1229 06:53:51.550496   17440 command_runner.go:130] > bd7d900efd48
	I1229 06:53:51.550505   17440 command_runner.go:130] > 8911777281f4
	I1229 06:53:51.550511   17440 command_runner.go:130] > a123d63a8edb
	I1229 06:53:51.550516   17440 command_runner.go:130] > 548561c7ada8
	I1229 06:53:51.550521   17440 command_runner.go:130] > fd22eb0d6c14
	I1229 06:53:51.550528   17440 command_runner.go:130] > 14aafc386533
	I1229 06:53:51.550540   17440 command_runner.go:130] > abbe46bd960e
	I1229 06:53:51.550548   17440 command_runner.go:130] > 4b032678478a
	I1229 06:53:51.550556   17440 command_runner.go:130] > 0af491ef7c2f
	I1229 06:53:51.550566   17440 command_runner.go:130] > 5024b03252e3
	I1229 06:53:51.550572   17440 command_runner.go:130] > fe7b5da2f7fb
	I1229 06:53:51.550582   17440 command_runner.go:130] > ad82b94f7629
	I1229 06:53:51.552420   17440 docker.go:487] Stopping containers: [6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629]
	I1229 06:53:51.552499   17440 ssh_runner.go:195] Run: docker stop 6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629
	I1229 06:53:51.976888   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:53:51.976911   17440 command_runner.go:130] > a014f32abcd0
	I1229 06:53:58.789216   17440 command_runner.go:130] > d81259f64136
	I1229 06:53:58.789240   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:53:58.789248   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:53:58.789252   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:53:58.789256   17440 command_runner.go:130] > 4ed279733477
	I1229 06:53:58.789259   17440 command_runner.go:130] > 1fc5fa7d9295
	I1229 06:53:58.789262   17440 command_runner.go:130] > 98261fa185f6
	I1229 06:53:58.789266   17440 command_runner.go:130] > b046056ff071
	I1229 06:53:58.789269   17440 command_runner.go:130] > b3cc8048f6d9
	I1229 06:53:58.789272   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:53:58.789275   17440 command_runner.go:130] > 64853b50a6c5
	I1229 06:53:58.789278   17440 command_runner.go:130] > bd7d900efd48
	I1229 06:53:58.789281   17440 command_runner.go:130] > 8911777281f4
	I1229 06:53:58.789284   17440 command_runner.go:130] > a123d63a8edb
	I1229 06:53:58.789287   17440 command_runner.go:130] > 548561c7ada8
	I1229 06:53:58.789295   17440 command_runner.go:130] > fd22eb0d6c14
	I1229 06:53:58.789299   17440 command_runner.go:130] > 14aafc386533
	I1229 06:53:58.789303   17440 command_runner.go:130] > abbe46bd960e
	I1229 06:53:58.789306   17440 command_runner.go:130] > 4b032678478a
	I1229 06:53:58.789310   17440 command_runner.go:130] > 0af491ef7c2f
	I1229 06:53:58.789314   17440 command_runner.go:130] > 5024b03252e3
	I1229 06:53:58.789317   17440 command_runner.go:130] > fe7b5da2f7fb
	I1229 06:53:58.789321   17440 command_runner.go:130] > ad82b94f7629
	I1229 06:53:58.790986   17440 ssh_runner.go:235] Completed: docker stop 6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629: (7.238443049s)
	I1229 06:53:58.791057   17440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 06:53:58.833953   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:53:58.857522   17440 command_runner.go:130] > -rw------- 1 root root 5635 Dec 29 06:52 /etc/kubernetes/admin.conf
	I1229 06:53:58.857550   17440 command_runner.go:130] > -rw------- 1 root root 5638 Dec 29 06:52 /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.857561   17440 command_runner.go:130] > -rw------- 1 root root 1974 Dec 29 06:52 /etc/kubernetes/kubelet.conf
	I1229 06:53:58.857571   17440 command_runner.go:130] > -rw------- 1 root root 5590 Dec 29 06:52 /etc/kubernetes/scheduler.conf
	I1229 06:53:58.857610   17440 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 29 06:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Dec 29 06:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1974 Dec 29 06:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Dec 29 06:52 /etc/kubernetes/scheduler.conf
	
	I1229 06:53:58.857671   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:53:58.875294   17440 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I1229 06:53:58.876565   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:53:58.896533   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.896617   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:53:58.917540   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.936703   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.936777   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.957032   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:53:58.970678   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.970742   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:53:58.992773   17440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:53:59.007767   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.061402   17440 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 06:53:59.061485   17440 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1229 06:53:59.061525   17440 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1229 06:53:59.061923   17440 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 06:53:59.062217   17440 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1229 06:53:59.062329   17440 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1229 06:53:59.062606   17440 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1229 06:53:59.062852   17440 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1229 06:53:59.062948   17440 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1229 06:53:59.063179   17440 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 06:53:59.063370   17440 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 06:53:59.063615   17440 command_runner.go:130] > [certs] Using the existing "sa" key
	I1229 06:53:59.066703   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.686012   17440 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 06:53:59.686050   17440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1229 06:53:59.686059   17440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I1229 06:53:59.686069   17440 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 06:53:59.686078   17440 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 06:53:59.686087   17440 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 06:53:59.686203   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.995495   17440 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 06:53:59.995529   17440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 06:53:59.995539   17440 command_runner.go:130] > [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 06:53:59.995545   17440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 06:53:59.995549   17440 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1229 06:53:59.995615   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:54:00.047957   17440 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 06:54:00.047983   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 06:54:00.053966   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 06:54:00.056537   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 06:54:00.059558   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:54:00.175745   17440 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 06:54:00.175825   17440 api_server.go:52] waiting for apiserver process to appear ...
	I1229 06:54:00.175893   17440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 06:54:00.233895   17440 command_runner.go:130] > 2416
	I1229 06:54:00.233940   17440 api_server.go:72] duration metric: took 58.126409ms to wait for apiserver process to appear ...
	I1229 06:54:00.233953   17440 api_server.go:88] waiting for apiserver healthz status ...
	I1229 06:54:00.233976   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:05.236821   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:05.236865   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:10.239922   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:10.239956   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:15.242312   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:15.242347   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:20.245667   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:20.245726   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:25.248449   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:25.248501   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:30.249241   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:30.249279   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:35.251737   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:35.251771   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:40.254366   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:40.254407   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:45.257232   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:45.257275   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:50.259644   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:50.259685   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:55.261558   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:55.261592   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:55:00.263123   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:55:00.263241   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:55:00.287429   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:55:00.288145   17440 logs.go:282] 1 containers: [fb6db97d8ffe]
	I1229 06:55:00.288289   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:55:00.310519   17440 command_runner.go:130] > d81259f64136
	I1229 06:55:00.310561   17440 logs.go:282] 1 containers: [d81259f64136]
	I1229 06:55:00.310630   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:55:00.334579   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:55:00.334624   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:55:00.334692   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:55:00.353472   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:55:00.353503   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:55:00.354626   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:55:00.354714   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:55:00.376699   17440 command_runner.go:130] > 8911777281f4
	I1229 06:55:00.378105   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:55:00.378188   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:55:00.397976   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:55:00.399617   17440 logs.go:282] 1 containers: [17fe16a2822a]
	I1229 06:55:00.399707   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:55:00.419591   17440 logs.go:282] 0 containers: []
	W1229 06:55:00.419617   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:55:00.419665   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:55:00.440784   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:55:00.441985   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:55:00.442020   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:55:00.442030   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:55:00.465151   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.465192   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:55:00.465226   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.465237   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:55:00.465255   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.465271   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:55:00.465285   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:55:00.465823   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:55:00.465845   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:55:00.487618   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:55:00.487646   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:55:00.508432   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.508468   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:55:00.508482   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:55:00.508508   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:55:00.508521   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:55:00.508529   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.508541   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:55:00.508551   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:55:00.508560   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:55:00.508568   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:55:00.510308   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:55:00.510337   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:55:00.531862   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.532900   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:55:00.532924   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:55:00.554051   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:55:00.554084   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:55:00.554095   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:55:00.554109   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:55:00.554131   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:55:00.554148   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:55:00.554170   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:55:00.554189   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:55:00.554195   17440 command_runner.go:130] !  >
	I1229 06:55:00.554208   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:55:00.554224   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:55:00.554250   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:55:00.554261   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:55:00.554273   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.554316   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:55:00.554327   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:55:00.554339   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:55:00.554350   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:55:00.554366   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:55:00.554381   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:55:00.554390   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:55:00.554402   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:55:00.554414   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:55:00.554427   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:55:00.554437   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:55:00.554452   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:55:00.556555   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:55:00.556578   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:55:00.581812   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:55:00.581848   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:55:00.581857   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:55:00.581865   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581874   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581881   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:55:00.581890   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:55:00.581911   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:55:00.581919   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581930   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581942   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:55:00.581949   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581957   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581964   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581975   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581985   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581993   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582003   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582010   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582020   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582030   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582037   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582044   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582051   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582070   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582080   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582088   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582097   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582105   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582115   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582125   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582141   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582152   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582160   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582170   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582177   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582186   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582193   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582203   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582211   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582221   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582228   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582235   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582242   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582252   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582261   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582269   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582276   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582287   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582294   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582302   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582312   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582319   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582329   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582336   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582346   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582353   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582363   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582370   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582378   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582385   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.586872   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:55:00.586916   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:55:00.609702   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.609731   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.609766   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.609784   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.609811   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.609822   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:55:00.609831   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:55:00.609842   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.609848   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.609857   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:55:00.609865   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.609879   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.609890   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.609906   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.609915   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.609923   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:55:00.609943   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.609954   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:55:00.609966   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.609976   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.609983   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:55:00.609990   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.609998   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610006   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610016   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610024   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610041   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610050   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610070   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610082   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.610091   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.610100   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.610107   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:55:00.610115   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.610123   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.610131   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.610141   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.610152   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.610159   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.610168   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610179   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:55:00.610191   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:55:00.610203   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.610216   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.610223   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.610231   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.610242   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:55:00.610251   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.610258   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.610265   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.610271   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.610281   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:55:00.610290   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.610303   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.610323   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.610335   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.610345   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.610355   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:55:00.610374   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.610384   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:55:00.610394   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.610404   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.610412   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610422   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610429   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610439   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610447   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610455   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610461   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610470   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:55:00.610476   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610483   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610491   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.610500   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.610508   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.610516   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:55:00.610523   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.610531   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.610538   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.610550   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.610559   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.610567   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.610573   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610579   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.610595   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.610607   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:55:00.610615   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.610622   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.610630   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.610637   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.610644   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:55:00.610653   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.610669   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.610680   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.610692   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.610705   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.610713   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:55:00.610735   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.610744   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:55:00.610755   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.610765   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.610772   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610781   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610789   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610809   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610818   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610824   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610853   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610867   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610881   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610896   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610909   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:55:00.610922   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610936   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610949   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610964   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610979   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.610995   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611010   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.611021   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.611037   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:55:00.611048   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:55:00.611062   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.611070   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.611079   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:55:00.611087   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.611096   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.611102   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.611109   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:55:00.611118   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.611125   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:55:00.611135   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.611146   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.611157   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.611167   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.611179   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.611186   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:55:00.611199   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611213   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611226   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611241   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611266   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611281   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611295   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611310   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611325   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611342   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611355   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611370   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611382   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611404   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.611417   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:55:00.611435   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:55:00.611449   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:55:00.611464   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:55:00.611476   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:55:00.611491   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:55:00.611502   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:55:00.611517   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:55:00.611529   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:55:00.611544   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:55:00.611558   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:55:00.611574   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:55:00.611586   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:55:00.611601   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:55:00.611617   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:55:00.611631   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:55:00.611645   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:55:00.611660   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:55:00.611674   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:55:00.611689   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:55:00.611702   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:55:00.611712   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:55:00.611722   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.611732   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.611740   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.611751   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.611759   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.611767   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.611835   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.611849   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.611867   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:55:00.611877   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.611888   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:55:00.611894   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.611901   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:55:00.611909   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.611917   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.611929   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.611937   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.611946   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:55:00.611954   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.611963   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.611971   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.611981   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.611990   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.611999   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.612006   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.612019   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612031   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612046   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612063   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612079   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612093   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612112   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:55:00.612128   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612142   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612157   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612171   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612185   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612201   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612217   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612230   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612245   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612259   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612274   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612293   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:55:00.612309   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612323   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:55:00.612338   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:55:00.612354   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612366   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612380   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:55:00.612394   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.612407   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:55:00.629261   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:55:00.629293   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:55:00.671242   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:55:00.671279   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       About a minute ago   Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:55:00.671293   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:55:00.671303   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       About a minute ago   Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:55:00.671315   17440 command_runner.go:130] > fb6db97d8ffe4       5c6acd67e9cd1       About a minute ago   Exited              kube-apiserver            1                   4ed2797334771       kube-apiserver-functional-695625            kube-system
	I1229 06:55:00.671327   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       About a minute ago   Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:55:00.671337   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       About a minute ago   Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:55:00.671347   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:55:00.671362   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       2 minutes ago        Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:55:00.673604   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:55:00.673628   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:55:00.695836   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077121    2634 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:55:00.695863   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077418    2634 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:55:00.695877   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077955    2634 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:55:00.695887   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.109084    2634 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:55:00.695901   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.135073    2634 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:55:00.695910   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.137245    2634 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:55:00.695920   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.137294    2634 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:55:00.695934   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.137340    2634 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:55:00.695942   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.209773    2634 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:55:00.695952   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.209976    2634 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:55:00.695962   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210050    2634 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:55:00.695975   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210361    2634 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:55:00.696001   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210374    2634 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:55:00.696011   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210392    2634 policy_none.go:50] "Start"
	I1229 06:55:00.696020   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210408    2634 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:55:00.696029   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210421    2634 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:55:00.696038   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210527    2634 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:55:00.696045   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210534    2634 policy_none.go:44] "Start"
	I1229 06:55:00.696056   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.219245    2634 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:55:00.696067   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.220437    2634 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:55:00.696078   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.220456    2634 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:55:00.696089   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.221071    2634 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:55:00.696114   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.226221    2634 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:55:00.696126   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.239387    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696144   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.239974    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696155   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.240381    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696165   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.262510    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696185   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283041    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696208   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283087    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696228   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283118    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696247   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283135    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696268   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283151    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696288   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283163    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696309   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283175    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696329   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283189    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696357   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283209    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696378   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283223    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696400   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283249    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696416   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.285713    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-functional-695625\" already exists" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696428   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.290012    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-functional-695625\" already exists" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696442   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.290269    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-functional-695625\" already exists" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696454   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.304300    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-functional-695625\" already exists" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696466   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.336817    2634 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.696475   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.351321    2634 kubelet_node_status.go:123] "Node was previously registered" node="functional-695625"
	I1229 06:55:00.696486   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.351415    2634 kubelet_node_status.go:77] "Successfully registered node" node="functional-695625"
	I1229 06:55:00.696493   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.033797    2634 apiserver.go:52] "Watching apiserver"
	I1229 06:55:00.696503   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.077546    2634 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1229 06:55:00.696527   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.181689    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-functional-695625" podStartSLOduration=3.181660018 podStartE2EDuration="3.181660018s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.180947341 +0000 UTC m=+1.223544146" watchObservedRunningTime="2025-12-29 06:52:42.181660018 +0000 UTC m=+1.224256834"
	I1229 06:55:00.696555   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.221952    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-functional-695625" podStartSLOduration=3.221936027 podStartE2EDuration="3.221936027s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.202120755 +0000 UTC m=+1.244717560" watchObservedRunningTime="2025-12-29 06:52:42.221936027 +0000 UTC m=+1.264532905"
	I1229 06:55:00.696583   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.238774    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-695625" podStartSLOduration=3.238759924 podStartE2EDuration="3.238759924s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.238698819 +0000 UTC m=+1.281295638" watchObservedRunningTime="2025-12-29 06:52:42.238759924 +0000 UTC m=+1.281356744"
	I1229 06:55:00.696609   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.238905    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-functional-695625" podStartSLOduration=3.238868136 podStartE2EDuration="3.238868136s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.224445467 +0000 UTC m=+1.267042290" watchObservedRunningTime="2025-12-29 06:52:42.238868136 +0000 UTC m=+1.281464962"
	I1229 06:55:00.696622   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266475    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696634   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266615    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696651   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266971    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696664   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.267487    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696678   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.287234    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-functional-695625\" already exists" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696690   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.287316    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696704   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.292837    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-functional-695625\" already exists" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696718   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.293863    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.696730   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.293764    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-functional-695625\" already exists" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696745   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.294163    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.696757   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.298557    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-functional-695625\" already exists" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696770   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.298633    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696782   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.272537    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.696807   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273148    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696835   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273501    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.696850   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273627    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696863   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: E1229 06:52:44.279056    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696877   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: E1229 06:52:44.279353    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696887   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: I1229 06:52:44.754123    2634 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1229 06:55:00.696899   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: I1229 06:52:44.756083    2634 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1229 06:55:00.696917   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.407560    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94mg5\" (UniqueName: \"kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696938   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.408503    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-proxy\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696958   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.408957    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-xtables-lock\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696976   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.409131    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-lib-modules\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696991   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528153    2634 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697004   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528186    2634 projected.go:196] Error preparing data for projected volume kube-api-access-94mg5 for pod kube-system/kube-proxy-g7lp9: configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697032   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528293    2634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5 podName:9c2c2ac1-7fa0-427d-b78e-ee14e169895a nodeName:}" failed. No retries permitted until 2025-12-29 06:52:46.028266861 +0000 UTC m=+5.070863673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-94mg5" (UniqueName: "kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5") pod "kube-proxy-g7lp9" (UID: "9c2c2ac1-7fa0-427d-b78e-ee14e169895a") : configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697044   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.406131    2634 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	I1229 06:55:00.697064   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519501    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64sn\" (UniqueName: \"kubernetes.io/projected/00a95e37-1394-45a7-a376-b195e31e3e9c-kube-api-access-b64sn\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:55:00.697084   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519550    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a95e37-1394-45a7-a376-b195e31e3e9c-config-volume\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:55:00.697104   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519571    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:55:00.697124   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519587    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:55:00.697138   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.411642    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605"
	I1229 06:55:00.697151   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.545186    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.697170   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731196    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f201ca-6d54-4e15-9584-396fb1486f3c-tmp\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:55:00.697192   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731252    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc5d\" (UniqueName: \"kubernetes.io/projected/b5f201ca-6d54-4e15-9584-396fb1486f3c-kube-api-access-ghc5d\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:55:00.697206   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.628275    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697229   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.634714    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9mrnn" podStartSLOduration=2.634698273 podStartE2EDuration="2.634698273s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.631484207 +0000 UTC m=+7.674081027" watchObservedRunningTime="2025-12-29 06:52:48.634698273 +0000 UTC m=+7.677295093"
	I1229 06:55:00.697245   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.649761    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.697268   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.694857    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfq7m" podStartSLOduration=2.694842541 podStartE2EDuration="2.694842541s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.672691157 +0000 UTC m=+7.715287974" watchObservedRunningTime="2025-12-29 06:52:48.694842541 +0000 UTC m=+7.737439360"
	I1229 06:55:00.697296   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.728097    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.728082592 podStartE2EDuration="1.728082592s" podCreationTimestamp="2025-12-29 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.696376688 +0000 UTC m=+7.738973499" watchObservedRunningTime="2025-12-29 06:52:48.728082592 +0000 UTC m=+7.770679413"
	I1229 06:55:00.697310   17440 command_runner.go:130] > Dec 29 06:52:49 functional-695625 kubelet[2634]: E1229 06:52:49.674249    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697322   17440 command_runner.go:130] > Dec 29 06:52:50 functional-695625 kubelet[2634]: E1229 06:52:50.680852    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697336   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.223368    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.697361   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: I1229 06:52:52.243928    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g7lp9" podStartSLOduration=7.243911092 podStartE2EDuration="7.243911092s" podCreationTimestamp="2025-12-29 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.744380777 +0000 UTC m=+7.786977597" watchObservedRunningTime="2025-12-29 06:52:52.243911092 +0000 UTC m=+11.286507895"
	I1229 06:55:00.697376   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.396096    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.697388   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.693687    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.697402   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: E1229 06:52:53.390926    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.697420   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979173    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:55:00.697442   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979225    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:55:00.697463   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979732    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	I1229 06:55:00.697483   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.981248    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "kube-api-access-lc5xj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	I1229 06:55:00.697499   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079447    2634 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:55:00.697515   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079521    2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:55:00.697526   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.715729    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697536   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.756456    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697554   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: E1229 06:52:54.758451    2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697576   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.758508    2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"} err="failed to get container status \"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697591   17440 command_runner.go:130] > Dec 29 06:52:55 functional-695625 kubelet[2634]: I1229 06:52:55.144582    2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4313c5f-3b86-48de-8f3c-02d7e007542a" path="/var/lib/kubelet/pods/c4313c5f-3b86-48de-8f3c-02d7e007542a/volumes"
	I1229 06:55:00.697608   17440 command_runner.go:130] > Dec 29 06:52:58 functional-695625 kubelet[2634]: E1229 06:52:58.655985    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.697621   17440 command_runner.go:130] > Dec 29 06:53:20 functional-695625 kubelet[2634]: E1229 06:53:20.683378    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697637   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913108    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697651   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913180    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697669   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913193    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697710   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915141    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697726   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915181    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697746   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915192    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697762   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139490    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.697775   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139600    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697790   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139623    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697815   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139634    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697830   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917175    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697846   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917271    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697860   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917284    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697876   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918722    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697892   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918780    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697906   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918792    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697923   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139097    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.697937   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139170    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697951   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139187    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697966   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139214    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697986   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921730    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698002   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921808    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698029   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921823    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698046   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.923664    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698060   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924161    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698081   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924185    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698097   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139396    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698113   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139458    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698126   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139472    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698141   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139485    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698155   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698172   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698187   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:55:00.698202   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698218   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:55:00.698235   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698274   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698293   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698309   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698325   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698341   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698362   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698378   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698395   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698408   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698424   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698439   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698455   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698469   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698484   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698501   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698514   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698527   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698541   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698554   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698577   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698590   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698606   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698620   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698634   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698650   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698666   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698682   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698696   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698711   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698727   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698743   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698756   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698769   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698784   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698808   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698823   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698840   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698853   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698868   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698886   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698903   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698916   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698933   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698948   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698962   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698976   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698993   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:55:00.699007   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:55:00.699018   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699031   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699042   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.699055   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.699067   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:55:00.699078   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.699093   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699105   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699119   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699130   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:55:00.699145   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.699157   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.699180   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:55:00.699195   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.699207   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:55:00.699224   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:55:00.699243   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:55:00.699256   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:55:00.699269   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.699284   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.699310   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.699330   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.699343   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:55:00.699362   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:55:00.699380   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.699407   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:55:00.699439   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:55:00.699460   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:55:00.699477   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.699497   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.699515   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:55:00.699533   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.699619   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.699640   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.699660   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699683   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699709   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699722   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:55:00.699738   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699750   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:55:00.699763   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699774   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:55:00.699785   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699807   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:55:00.699820   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.699834   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699846   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699861   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699872   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.699886   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.699931   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699946   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699956   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:55:00.699972   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700008   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:55:00.700031   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700053   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700067   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:55:00.700078   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:55:00.700091   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:55:00.700102   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700116   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:55:00.700129   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.700139   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:55:00.700159   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700168   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:55:00.700179   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:55:00.700190   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700199   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700217   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700228   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700240   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:55:00.700250   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:55:00.700268   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700281   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.700291   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:55:00.700310   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700321   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700331   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:55:00.700349   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700364   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700375   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700394   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700405   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700415   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700427   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700454   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:55:00.700474   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700515   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:55:00.700529   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700539   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700558   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700570   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.700578   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:55:00.700584   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:55:00.700590   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:55:00.700597   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:55:00.700603   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:55:00.700612   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:55:00.700620   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.700631   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:55:00.700641   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:55:00.700652   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:55:00.700662   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:55:00.700674   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.700684   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:55:00.700696   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:55:00.700707   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:55:00.700717   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:55:00.700758   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:55:00.700770   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:55:00.700779   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:55:00.700790   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:55:00.700816   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:55:00.700831   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:55:00.700846   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:55:00.700858   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:55:00.700866   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:55:00.700879   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:55:00.700891   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:55:00.700905   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:55:00.700912   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:55:00.700921   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:55:00.700932   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.700943   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:55:00.700951   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:55:00.700963   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:55:00.700971   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:55:00.700986   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:55:00.701000   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:55:00.701008   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:55:00.701020   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:55:00.701029   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:55:00.701037   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:55:00.701046   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:55:00.701061   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:55:00.701073   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:55:00.701082   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:55:00.701093   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:55:00.701100   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:55:00.701114   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:55:00.701124   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:55:00.701143   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.701160   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.701170   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:55:00.701178   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:55:00.701188   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:55:00.701201   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:55:00.701210   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:55:00.701218   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:55:00.701226   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:55:00.701237   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:55:00.701246   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:55:00.701256   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:55:00.701266   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:55:00.701277   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:55:00.701287   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:55:00.701297   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:55:00.701308   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:55:00.701322   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701334   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701348   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701361   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:55:00.701372   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:55:00.701385   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:55:00.701399   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:55:00.701410   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:55:00.701422   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:55:00.701433   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701447   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.701458   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:55:00.701471   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:55:00.701483   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.701496   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:55:00.701508   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:55:00.701521   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:55:00.701533   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701550   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.701567   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:55:00.701581   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701592   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701611   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:55:00.701625   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701642   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:55:00.701678   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:55:00.701695   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:55:00.701705   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.701716   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701735   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:55:00.701749   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701764   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:55:00.701780   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:55:00.701807   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701827   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701847   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701867   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701886   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.701907   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701928   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701948   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701971   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701995   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.702014   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.702027   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.755255   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:55:00.755293   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:55:00.771031   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:55:00.771066   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:55:00.771079   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:55:00.771088   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:55:00.771097   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:55:00.771103   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:55:00.771109   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:55:00.771116   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:55:00.771121   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:55:00.771126   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:55:00.771131   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:55:00.771136   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:55:00.771143   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:55:00.771153   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:55:00.771158   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:55:00.771165   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:55:00.771175   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:55:00.771185   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:55:00.771191   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:55:00.771196   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:55:00.771202   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:55:00.772218   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:55:00.772246   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:56:00.863293   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:56:00.863340   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.091082059s)
	W1229 06:56:00.863385   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:56:00.863402   17440 logs.go:123] Gathering logs for kube-apiserver [fb6db97d8ffe] ...
	I1229 06:56:00.863420   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6db97d8ffe"
	I1229 06:56:00.897112   17440 command_runner.go:130] ! I1229 06:53:50.588377       1 options.go:263] external host was not specified, using 192.168.39.121
	I1229 06:56:00.897142   17440 command_runner.go:130] ! I1229 06:53:50.597275       1 server.go:150] Version: v1.35.0
	I1229 06:56:00.897153   17440 command_runner.go:130] ! I1229 06:53:50.597323       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:00.897164   17440 command_runner.go:130] ! E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	W1229 06:56:00.898716   17440 logs.go:138] Found kube-apiserver [fb6db97d8ffe] problem: E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:56:00.898738   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:56:00.898750   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:56:00.935530   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:00.938590   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:00.938653   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:00.938666   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:00.938679   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:00.938689   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:00.938712   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:00.938728   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:00.938838   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:00.938875   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:56:00.938892   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:00.938902   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:56:00.938913   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:56:00.938922   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:00.938935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:00.938946   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:00.938958   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:56:00.938969   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:00.938978   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:00.938993   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:00.939003   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:00.939022   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:00.939035   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:00.939046   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:00.939053   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:00.939062   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:00.939071   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:56:00.939081   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:56:00.939091   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:00.939111   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:00.939126   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:00.939142   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:00.939162   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:00.939181   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:00.939213   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:00.939249   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:00.939258   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:00.939274   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:00.939289   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:00.939302   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:00.939324   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:00.939342   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.939352   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:00.939362   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:00.939377   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:00.939389   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:00.939404   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:56:00.939423   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:56:00.939439   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:56:00.939458   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:00.939467   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:56:00.939478   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:00.939494   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:00.939513   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:56:00.939528   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:56:00.939544   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:00.939564   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:00.939586   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:00.939603   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:00.939616   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:00.939882   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:00.939915   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:00.939932   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:00.939947   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:00.939960   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:56:00.939998   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:00.940030   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:00.940064   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:00.940122   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940150   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:56:00.940167   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:56:00.940187   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:00.940204   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:00.940257   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940277   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:56:00.940301   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:00.940334   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:00.940371   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940389   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.940425   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940447   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.940473   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:00.955065   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:56:00.955108   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 06:56:00.955188   17440 out.go:285] X Problems detected in kube-apiserver [fb6db97d8ffe]:
	W1229 06:56:00.955202   17440 out.go:285]   E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:56:00.955209   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:56:00.955215   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:56:10.957344   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:56:15.961183   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:56:15.961319   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:56:15.981705   17440 command_runner.go:130] > 18d0015c724a
	I1229 06:56:15.982641   17440 logs.go:282] 1 containers: [18d0015c724a]
	I1229 06:56:15.982732   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:56:16.002259   17440 command_runner.go:130] > 6b7711ee25a2
	I1229 06:56:16.002292   17440 command_runner.go:130] > d81259f64136
	I1229 06:56:16.002322   17440 logs.go:282] 2 containers: [6b7711ee25a2 d81259f64136]
	I1229 06:56:16.002399   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:56:16.021992   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:56:16.022032   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:56:16.022113   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:56:16.048104   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:56:16.048133   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:56:16.049355   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:56:16.049441   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:56:16.071523   17440 command_runner.go:130] > 8911777281f4
	I1229 06:56:16.072578   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:56:16.072668   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:56:16.092921   17440 command_runner.go:130] > f48fc04e3475
	I1229 06:56:16.092948   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:56:16.092975   17440 logs.go:282] 2 containers: [f48fc04e3475 17fe16a2822a]
	I1229 06:56:16.093047   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:56:16.113949   17440 logs.go:282] 0 containers: []
	W1229 06:56:16.113983   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:56:16.114047   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:56:16.135700   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:56:16.135739   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:56:16.135766   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:56:16.135786   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:56:16.152008   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:56:16.152038   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:56:16.152046   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:56:16.152054   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:56:16.152063   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:56:16.152069   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:56:16.152076   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:56:16.152081   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:56:16.152086   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:56:16.152091   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:56:16.152096   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:56:16.152102   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:56:16.152107   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:56:16.152112   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:56:16.152119   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:56:16.152128   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:56:16.152148   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:56:16.152164   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:56:16.152180   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:56:16.152190   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:56:16.152201   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:56:16.152209   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:56:16.152217   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:56:16.153163   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:56:16.153192   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:56:16.174824   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:56:16.174856   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:56:16.174862   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:56:16.174873   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:56:16.174892   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:56:16.174900   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:56:16.174913   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:56:16.174920   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:56:16.174924   17440 command_runner.go:130] !  >
	I1229 06:56:16.174931   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:56:16.174941   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:56:16.174957   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:56:16.174966   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:56:16.174975   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.174985   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:56:16.174994   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:56:16.175003   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:56:16.175012   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:56:16.175024   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:56:16.175033   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:56:16.175040   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:56:16.175050   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:56:16.175074   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:56:16.175325   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:56:16.175351   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:56:16.175362   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:56:16.177120   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:56:16.177144   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:56:16.222627   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:56:16.222665   17440 command_runner.go:130] > 18d0015c724a8       5c6acd67e9cd1       5 seconds ago       Exited              kube-apiserver            3                   d3819cc8ab802       kube-apiserver-functional-695625            kube-system
	I1229 06:56:16.222684   17440 command_runner.go:130] > f48fc04e34751       2c9a4b058bd7e       16 seconds ago      Running             kube-controller-manager   2                   0a96e34d38f8c       kube-controller-manager-functional-695625   kube-system
	I1229 06:56:16.222707   17440 command_runner.go:130] > 6b7711ee25a2d       0a108f7189562       16 seconds ago      Running             etcd                      2                   173054afc2f39       etcd-functional-695625                      kube-system
	I1229 06:56:16.222730   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       2 minutes ago       Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:56:16.222749   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       2 minutes ago       Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:56:16.222768   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       2 minutes ago       Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:56:16.222810   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       2 minutes ago       Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:56:16.222831   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       2 minutes ago       Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:56:16.222851   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:56:16.222879   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       3 minutes ago       Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:56:16.225409   17440 logs.go:123] Gathering logs for etcd [6b7711ee25a2] ...
	I1229 06:56:16.225439   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b7711ee25a2"
	I1229 06:56:16.247416   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.924768Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.247449   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925193Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:16.247516   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925252Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:16.247533   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925487Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:16.247545   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925602Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.247555   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925710Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:16.247582   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925810Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.247605   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.934471Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:16.247698   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.935217Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:16.247722   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.937503Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000068080}"}
	I1229 06:56:16.247733   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940423Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:16.247745   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940850Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.479356ms"}
	I1229 06:56:16.247753   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.941120Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":499}
	I1229 06:56:16.247762   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945006Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:16.247774   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945707Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:16.247782   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945966Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:16.247807   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.951906Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":499}
	I1229 06:56:16.247816   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952063Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:16.247825   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952160Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:16.247840   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952338Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:16.247851   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952385Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:16.247867   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952396Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:16.247878   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952406Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:16.247886   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952416Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:16.247893   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952460Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:16.247902   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:16.247914   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 3"}
	I1229 06:56:16.247924   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 3, commit: 499, applied: 0, lastindex: 499, lastterm: 3]"}
	I1229 06:56:16.247935   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.955095Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:16.247952   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.961356Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:16.247965   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.967658Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:16.247975   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.968487Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:16.247988   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969020Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.248000   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969260Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:16.248016   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969708Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:16.248035   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970043Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.248063   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970828Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:16.248074   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971046Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:16.248083   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970057Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.248092   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971258Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:16.248103   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970152Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:16.248113   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971336Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:16.248126   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971370Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:16.248136   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970393Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:16.248153   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972410Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:16.248166   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972698Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:16.248177   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 3"}
	I1229 06:56:16.248186   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 3"}
	I1229 06:56:16.248198   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.248208   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.248219   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 4"}
	I1229 06:56:16.248228   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 4"}
	I1229 06:56:16.248240   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.248248   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 4"}
	I1229 06:56:16.248260   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.356018Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 4"}
	I1229 06:56:16.248275   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358237Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:16.248287   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358323Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.248295   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358268Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.248304   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:16.248312   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:16.248320   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360417Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.248331   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.248341   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:16.248352   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363760Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:16.254841   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:56:16.254869   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:56:16.278647   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.278679   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:16.278723   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:16.278736   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:16.278750   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.278759   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:16.278780   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.278809   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:16.278890   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:16.278913   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:56:16.278923   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:16.278935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:56:16.278946   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:56:16.278957   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:16.278971   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:16.278982   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:16.278996   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:56:16.279006   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:16.279014   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:16.279031   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:16.279040   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:16.279072   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:16.279083   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:16.279091   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:16.279101   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:16.279110   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:16.279121   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:56:16.279132   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:56:16.279142   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:16.279159   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:16.279173   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:16.279183   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:16.279195   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.279208   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:16.279226   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.279249   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:16.279260   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:16.279275   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:16.279289   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:16.279300   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:16.279313   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:16.279322   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279332   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:16.279343   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:16.279359   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:16.279374   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:16.279386   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:56:16.279396   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:56:16.279406   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:56:16.279418   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.279429   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:56:16.279439   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.279451   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.279460   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:56:16.279469   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.279479   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.279494   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:16.279503   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.279513   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.279523   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:16.279531   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:16.279541   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.279551   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:16.279562   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:16.279570   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:56:16.279585   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:16.279603   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:16.279622   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:16.279661   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279676   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.279688   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:56:16.279698   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:16.279711   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:16.279730   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279741   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:56:16.279751   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:16.279764   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:16.279785   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279805   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279825   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279836   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279852   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:16.287590   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:56:16.287613   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:56:16.310292   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:56:16.310320   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:56:16.331009   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:56:16.331044   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:56:16.331054   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:56:16.331067   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331076   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331083   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:56:16.331093   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:56:16.331114   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:56:16.331232   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331256   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331268   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:56:16.331275   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331289   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331298   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331316   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331329   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331341   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331355   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331363   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331374   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331386   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331400   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331413   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331425   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331441   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331454   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331468   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331478   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331488   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331496   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331506   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331519   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331529   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331537   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331547   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331555   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331564   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331572   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331580   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331592   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331604   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331618   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331629   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331645   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331659   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331673   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331689   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331703   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331716   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331728   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331740   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331756   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331771   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331784   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331816   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331830   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331847   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331863   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331879   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331894   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331908   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.336243   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:56:16.336267   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:56:16.358115   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358145   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358155   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358165   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358177   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.358186   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:56:16.358194   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:56:16.358203   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358209   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.358220   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:56:16.358229   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.358241   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.358254   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.358266   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.358278   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.358285   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:56:16.358307   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.358315   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:56:16.358328   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.358336   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.358343   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:56:16.358350   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358360   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.358369   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.358377   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.358385   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.358399   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.358408   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.358415   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358425   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358436   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358445   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358455   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:56:16.358463   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.358474   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.358481   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.358491   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.358500   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.358508   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.358515   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358530   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:56:16.358543   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:56:16.358555   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.358576   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.358584   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.358593   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.358604   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:56:16.358614   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.358621   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.358628   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.358635   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.358644   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:56:16.358653   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.358666   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.358685   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.358697   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.358707   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.358716   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:56:16.358735   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.358745   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:56:16.358755   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.358763   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.358805   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.358818   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.358827   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.358837   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.358847   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.358854   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.358861   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358867   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:56:16.358874   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358881   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358893   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358904   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358913   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358921   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:56:16.358930   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.358942   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.358950   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.358959   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.358970   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.358979   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.358986   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358992   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.359001   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.359011   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:56:16.359021   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.359029   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.359036   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.359042   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.359052   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:56:16.359060   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.359071   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.359084   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.359094   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.359106   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.359113   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:56:16.359135   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.359144   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:56:16.359154   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.359164   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.359172   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.359182   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.359190   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.359198   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.359206   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.359213   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.359244   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359260   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359275   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359288   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359300   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:56:16.359313   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359328   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359343   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359357   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359372   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359386   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359399   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.359410   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.359422   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:56:16.359435   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:56:16.359442   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.359452   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.359460   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:56:16.359468   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.359474   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.359481   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.359487   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:56:16.359494   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.359502   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:56:16.359511   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.359521   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.359532   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.359544   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.359553   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.359561   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:56:16.359574   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359590   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359602   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359617   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359630   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359646   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359660   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359676   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359689   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359706   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359719   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359731   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359744   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359763   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.359779   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:56:16.359800   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:56:16.359813   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:56:16.359827   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:56:16.359837   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:56:16.359852   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:56:16.359864   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:56:16.359878   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:56:16.359890   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:56:16.359904   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:56:16.359916   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:56:16.359932   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:56:16.359945   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:56:16.359960   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:56:16.359975   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:56:16.359988   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:56:16.360003   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:56:16.360019   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:56:16.360037   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:56:16.360051   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:56:16.360064   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:56:16.360074   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:56:16.360085   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.360093   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.360102   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.360113   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.360121   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.360130   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.360163   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.360172   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.360189   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:56:16.360197   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.360204   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:56:16.360210   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.360218   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:56:16.360225   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.360236   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.360245   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.360255   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.360263   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:56:16.360271   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.360280   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.360288   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.360297   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.360308   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.360317   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.360326   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.360338   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360353   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360365   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360380   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360392   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360410   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360426   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:56:16.360441   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360454   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360467   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360482   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360494   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360510   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360525   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360538   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360553   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360566   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360582   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360599   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:56:16.360617   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360628   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:56:16.360643   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:56:16.360656   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360671   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360682   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:56:16.360699   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.360711   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:56:16.360726   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.360736   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:56:16.360749   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360762   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.377860   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:56:16.377891   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:56:16.394828   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.406131    2634 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	I1229 06:56:16.394877   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519501    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64sn\" (UniqueName: \"kubernetes.io/projected/00a95e37-1394-45a7-a376-b195e31e3e9c-kube-api-access-b64sn\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:56:16.394896   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519550    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a95e37-1394-45a7-a376-b195e31e3e9c-config-volume\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:56:16.394920   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519571    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:56:16.394952   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519587    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:56:16.394976   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.411642    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605"
	I1229 06:56:16.394988   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.545186    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.395012   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731196    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f201ca-6d54-4e15-9584-396fb1486f3c-tmp\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:56:16.395045   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731252    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc5d\" (UniqueName: \"kubernetes.io/projected/b5f201ca-6d54-4e15-9584-396fb1486f3c-kube-api-access-ghc5d\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:56:16.395075   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.628275    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395109   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.634714    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9mrnn" podStartSLOduration=2.634698273 podStartE2EDuration="2.634698273s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.631484207 +0000 UTC m=+7.674081027" watchObservedRunningTime="2025-12-29 06:52:48.634698273 +0000 UTC m=+7.677295093"
	I1229 06:56:16.395143   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.649761    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.395179   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.694857    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfq7m" podStartSLOduration=2.694842541 podStartE2EDuration="2.694842541s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.672691157 +0000 UTC m=+7.715287974" watchObservedRunningTime="2025-12-29 06:52:48.694842541 +0000 UTC m=+7.737439360"
	I1229 06:56:16.395221   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.728097    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.728082592 podStartE2EDuration="1.728082592s" podCreationTimestamp="2025-12-29 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.696376688 +0000 UTC m=+7.738973499" watchObservedRunningTime="2025-12-29 06:52:48.728082592 +0000 UTC m=+7.770679413"
	I1229 06:56:16.395242   17440 command_runner.go:130] > Dec 29 06:52:49 functional-695625 kubelet[2634]: E1229 06:52:49.674249    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395263   17440 command_runner.go:130] > Dec 29 06:52:50 functional-695625 kubelet[2634]: E1229 06:52:50.680852    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395283   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.223368    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.395324   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: I1229 06:52:52.243928    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g7lp9" podStartSLOduration=7.243911092 podStartE2EDuration="7.243911092s" podCreationTimestamp="2025-12-29 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.744380777 +0000 UTC m=+7.786977597" watchObservedRunningTime="2025-12-29 06:52:52.243911092 +0000 UTC m=+11.286507895"
	I1229 06:56:16.395347   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.396096    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.395368   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.693687    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.395390   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: E1229 06:52:53.390926    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.395423   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979173    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:56:16.395451   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979225    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:56:16.395496   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979732    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	I1229 06:56:16.395529   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.981248    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "kube-api-access-lc5xj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	I1229 06:56:16.395551   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079447    2634 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:56:16.395578   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079521    2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:56:16.395597   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.715729    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395618   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.756456    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395641   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: E1229 06:52:54.758451    2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395678   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.758508    2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"} err="failed to get container status \"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395702   17440 command_runner.go:130] > Dec 29 06:52:55 functional-695625 kubelet[2634]: I1229 06:52:55.144582    2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4313c5f-3b86-48de-8f3c-02d7e007542a" path="/var/lib/kubelet/pods/c4313c5f-3b86-48de-8f3c-02d7e007542a/volumes"
	I1229 06:56:16.395719   17440 command_runner.go:130] > Dec 29 06:52:58 functional-695625 kubelet[2634]: E1229 06:52:58.655985    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.395743   17440 command_runner.go:130] > Dec 29 06:53:20 functional-695625 kubelet[2634]: E1229 06:53:20.683378    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395770   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913108    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.395806   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913180    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395831   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913193    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395859   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915141    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.395885   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915181    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395903   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915192    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395929   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139490    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.395956   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139600    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395981   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139623    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396000   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139634    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396027   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917175    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396052   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917271    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396087   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917284    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396114   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918722    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396138   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918780    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396161   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918792    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396186   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139097    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396267   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139170    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396295   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139187    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396315   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139214    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396339   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921730    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396362   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921808    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396387   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921823    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396413   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.923664    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396433   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924161    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396458   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924185    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396484   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139396    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396508   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139458    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396526   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139472    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396550   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139485    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396585   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396609   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396634   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:56:16.396662   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396687   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:56:16.396711   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396739   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396763   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396786   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396821   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396848   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396872   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396891   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396919   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396943   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396966   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396989   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397016   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397040   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397064   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397089   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397114   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397139   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397161   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397187   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397211   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397233   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397256   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397281   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397307   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397330   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397358   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397387   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397424   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397450   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397477   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397500   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397521   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397544   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397571   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397594   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397618   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397644   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397668   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397686   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397742   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397766   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397786   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397818   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397849   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397872   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397897   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397918   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:56:16.397940   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:56:16.397961   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.397984   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.398006   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.398027   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.398047   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:56:16.398071   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.398100   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398122   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398141   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398162   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:56:16.398186   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:56:16.398209   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:56:16.398244   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:56:16.398272   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.398294   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:56:16.398317   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:56:16.398350   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:56:16.398371   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:56:16.398394   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.398413   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.398456   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.398481   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.398498   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:56:16.398525   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:56:16.398557   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.398599   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:56:16.398632   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:56:16.398661   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:56:16.398683   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.398714   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.398746   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:56:16.398769   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.398813   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.398843   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.398873   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398910   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398942   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398963   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:56:16.398985   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399007   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:56:16.399028   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399052   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:56:16.399082   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399104   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:56:16.399121   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.399145   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399170   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399191   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399209   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399231   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.399253   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399275   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399295   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:56:16.399309   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399328   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:56:16.399366   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399402   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399416   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:56:16.399427   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:56:16.399440   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:56:16.399454   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399467   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:56:16.399491   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399517   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.399553   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399565   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:56:16.399576   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:56:16.399588   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399598   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.399618   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399629   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399640   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:56:16.399653   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:56:16.399671   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399684   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399694   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.399724   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399741   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399752   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:56:16.399771   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399782   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399801   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.399822   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399834   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399845   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399857   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399866   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:56:16.399885   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399928   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:56:16.400087   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.400109   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.400130   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.400140   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.400147   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:56:16.400153   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:56:16.400162   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:56:16.400169   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:56:16.400175   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:56:16.400184   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:56:16.400193   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.400201   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:56:16.400213   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:56:16.400222   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:56:16.400233   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:56:16.400243   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.400253   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:56:16.400262   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:56:16.400272   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:56:16.400281   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:56:16.400693   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:56:16.400713   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:56:16.400724   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:56:16.400734   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:56:16.400742   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:56:16.400751   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:56:16.400760   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:56:16.400768   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:56:16.400780   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:56:16.400812   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:56:16.400833   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:56:16.400853   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:56:16.400868   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:56:16.400877   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:56:16.400887   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.400896   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:56:16.400903   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:56:16.400915   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:56:16.400924   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:56:16.400936   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:56:16.400950   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:56:16.400961   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:56:16.400972   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:56:16.400985   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:56:16.400993   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:56:16.401003   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:56:16.401016   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:56:16.401027   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:56:16.401036   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:56:16.401045   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:56:16.401053   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:56:16.401070   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:56:16.401083   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:56:16.401100   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.401120   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.401132   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:56:16.401141   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:56:16.401150   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:56:16.401160   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:56:16.401173   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:56:16.401180   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:56:16.401189   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:56:16.401198   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:56:16.401209   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:56:16.401217   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:56:16.401228   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:56:16.401415   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:56:16.401435   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:56:16.401444   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:56:16.401456   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:56:16.401467   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401486   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401508   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401529   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:56:16.401553   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:56:16.401575   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:56:16.401589   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:56:16.401602   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:56:16.401614   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:56:16.401628   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401640   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.401653   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:56:16.401667   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:56:16.401679   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.401693   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:56:16.401706   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:56:16.401720   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:56:16.401733   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401745   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.401762   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:56:16.401816   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401840   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401871   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:56:16.401900   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401920   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:56:16.401958   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.401977   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.401987   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402002   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402019   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:56:16.402033   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402048   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:56:16.402065   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:56:16.402085   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402107   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402134   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402169   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402204   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.402228   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402250   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402272   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402294   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402314   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.402335   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.402349   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402367   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:56:16.402405   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.402421   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.402433   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402444   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402530   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402557   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:56:16.402569   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402585   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:56:16.402600   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402639   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.402655   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.402666   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402677   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402697   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:56:16.402714   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:56:16.402726   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402737   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.402752   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:56:16.402917   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.402934   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.402947   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.402959   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.402972   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.402996   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403011   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403026   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403043   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403056   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403070   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403082   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403096   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403110   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403125   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403138   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403152   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403292   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403310   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403325   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403339   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403361   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403376   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403389   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403402   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403417   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403428   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403450   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403464   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.403480   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403495   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403506   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403636   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403671   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403686   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403702   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403720   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403739   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403753   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403767   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403780   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403806   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403820   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403833   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403850   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403871   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403890   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403914   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403936   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403952   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:56:16.403976   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403994   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.404007   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.404022   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.404034   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.404046   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:56:16.404066   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.404085   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:56:16.404122   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.454878   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:56:16.454917   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:56:16.478085   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.478126   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:56:16.478136   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:56:16.478148   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:56:16.478155   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:56:16.478166   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.478175   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:56:16.478185   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:56:16.478194   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:56:16.478203   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.478825   17440 logs.go:123] Gathering logs for kube-controller-manager [f48fc04e3475] ...
	I1229 06:56:16.478843   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48fc04e3475"
	I1229 06:56:16.501568   17440 command_runner.go:130] ! I1229 06:56:01.090404       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.501592   17440 command_runner.go:130] ! I1229 06:56:01.103535       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:56:16.501601   17440 command_runner.go:130] ! I1229 06:56:01.103787       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.501610   17440 command_runner.go:130] ! I1229 06:56:01.105458       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:56:16.501623   17440 command_runner.go:130] ! I1229 06:56:01.105665       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.501630   17440 command_runner.go:130] ! I1229 06:56:01.105907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:56:16.501636   17440 command_runner.go:130] ! I1229 06:56:01.105924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.501982   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:56:16.501996   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:56:16.524487   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.524514   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:56:16.524523   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.524767   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:56:16.524788   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.524805   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:56:16.524812   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.526406   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:56:16.526437   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:57:16.604286   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:57:16.606268   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.079810784s)
	W1229 06:57:16.606306   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:57:16.606317   17440 logs.go:123] Gathering logs for kube-apiserver [18d0015c724a] ...
	I1229 06:57:16.606331   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d0015c724a"
	I1229 06:57:16.636305   17440 command_runner.go:130] ! Error response from daemon: No such container: 18d0015c724a
	W1229 06:57:16.636367   17440 logs.go:130] failed kube-apiserver [18d0015c724a]: command: /bin/bash -c "docker logs --tail 400 18d0015c724a" /bin/bash -c "docker logs --tail 400 18d0015c724a": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 18d0015c724a
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 18d0015c724a
	
	** /stderr **
	I1229 06:57:16.636376   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:57:16.636391   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:57:16.657452   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:57:19.160135   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:57:24.162053   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:57:24.162161   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:57:24.182182   17440 command_runner.go:130] > b206d555ad19
	I1229 06:57:24.183367   17440 logs.go:282] 1 containers: [b206d555ad19]
	I1229 06:57:24.183464   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:57:24.206759   17440 command_runner.go:130] > 6b7711ee25a2
	I1229 06:57:24.206821   17440 command_runner.go:130] > d81259f64136
	I1229 06:57:24.206853   17440 logs.go:282] 2 containers: [6b7711ee25a2 d81259f64136]
	I1229 06:57:24.206926   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:57:24.228856   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:57:24.228897   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:57:24.228968   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:57:24.247867   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:57:24.247890   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:57:24.249034   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:57:24.249130   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:57:24.268209   17440 command_runner.go:130] > 8911777281f4
	I1229 06:57:24.269160   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:57:24.269243   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:57:24.288837   17440 command_runner.go:130] > f48fc04e3475
	I1229 06:57:24.288871   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:57:24.290245   17440 logs.go:282] 2 containers: [f48fc04e3475 17fe16a2822a]
	I1229 06:57:24.290337   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:57:24.312502   17440 logs.go:282] 0 containers: []
	W1229 06:57:24.312531   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:57:24.312592   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:57:24.334811   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:57:24.334849   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:57:24.334875   17440 logs.go:123] Gathering logs for kube-apiserver [b206d555ad19] ...
	I1229 06:57:24.334888   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b206d555ad19"
	I1229 06:57:24.357541   17440 command_runner.go:130] ! I1229 06:57:22.434262       1 options.go:263] external host was not specified, using 192.168.39.121
	I1229 06:57:24.357567   17440 command_runner.go:130] ! I1229 06:57:22.436951       1 server.go:150] Version: v1.35.0
	I1229 06:57:24.357577   17440 command_runner.go:130] ! I1229 06:57:22.436991       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.357602   17440 command_runner.go:130] ! E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	W1229 06:57:24.359181   17440 logs.go:138] Found kube-apiserver [b206d555ad19] problem: E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:57:24.359206   17440 logs.go:123] Gathering logs for kube-controller-manager [f48fc04e3475] ...
	I1229 06:57:24.359218   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48fc04e3475"
	I1229 06:57:24.381077   17440 command_runner.go:130] ! I1229 06:56:01.090404       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:57:24.381103   17440 command_runner.go:130] ! I1229 06:56:01.103535       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:57:24.381113   17440 command_runner.go:130] ! I1229 06:56:01.103787       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.381121   17440 command_runner.go:130] ! I1229 06:56:01.105458       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:57:24.381131   17440 command_runner.go:130] ! I1229 06:56:01.105665       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.381137   17440 command_runner.go:130] ! I1229 06:56:01.105907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:57:24.381144   17440 command_runner.go:130] ! I1229 06:56:01.105924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:57:24.382680   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:57:24.382711   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:57:24.427354   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:57:24.427382   17440 command_runner.go:130] > b206d555ad194       5c6acd67e9cd1       2 seconds ago        Exited              kube-apiserver            5                   d3819cc8ab802       kube-apiserver-functional-695625            kube-system
	I1229 06:57:24.427400   17440 command_runner.go:130] > f48fc04e34751       2c9a4b058bd7e       About a minute ago   Running             kube-controller-manager   2                   0a96e34d38f8c       kube-controller-manager-functional-695625   kube-system
	I1229 06:57:24.427411   17440 command_runner.go:130] > 6b7711ee25a2d       0a108f7189562       About a minute ago   Running             etcd                      2                   173054afc2f39       etcd-functional-695625                      kube-system
	I1229 06:57:24.427421   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       3 minutes ago        Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:57:24.427441   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       3 minutes ago        Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:57:24.427454   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       3 minutes ago        Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:57:24.427465   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       3 minutes ago        Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:57:24.427477   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       3 minutes ago        Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:57:24.427488   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:57:24.427509   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       4 minutes ago        Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:57:24.430056   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:57:24.430095   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:57:24.453665   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453712   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453738   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:57:24.453770   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453809   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:57:24.453838   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453867   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.453891   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453911   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453928   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453945   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.453961   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453974   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454002   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454022   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454040   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454058   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454074   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454087   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454103   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454120   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454135   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454149   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454165   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454179   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454194   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454208   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454224   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454246   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454262   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454276   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454294   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454310   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454326   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454342   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454358   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454371   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454386   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454401   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454423   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454447   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454472   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454500   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454519   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454533   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454549   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454565   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454579   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454593   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454608   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454625   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454640   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454655   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:57:24.454667   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:57:24.454680   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.454697   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.454714   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.454729   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.454741   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:57:24.454816   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.454842   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454855   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454870   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454881   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:57:24.454896   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:57:24.454912   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:57:24.454940   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:57:24.454957   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.454969   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:57:24.454987   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:57:24.455012   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:57:24.455025   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:57:24.455039   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.455055   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.455081   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.455097   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.455110   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:57:24.455125   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:57:24.455144   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.455165   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:57:24.455186   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:57:24.455204   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:57:24.455224   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.455243   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.455275   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:57:24.455294   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455310   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.455326   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.455345   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455366   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455386   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455404   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:57:24.455423   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455446   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:57:24.455472   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455490   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:57:24.455506   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455528   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:57:24.455550   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.455573   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455588   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455603   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455615   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.455628   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.455640   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455657   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455669   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:57:24.455681   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455699   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:57:24.455720   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455739   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.455750   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:57:24.455810   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:57:24.455823   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:57:24.455835   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455848   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:57:24.455860   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455872   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.455892   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455904   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:57:24.455916   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:57:24.455930   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455967   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.455990   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456008   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456019   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:57:24.456031   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:57:24.456052   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456067   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.456078   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.456100   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.456114   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456124   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:57:24.456144   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456159   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.456169   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.456191   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456205   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.456216   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.456229   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456239   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:57:24.456260   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456304   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:57:24.456318   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.456331   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.456352   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456364   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.456372   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:57:24.456379   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:57:24.456386   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:57:24.456396   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:57:24.456406   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:57:24.456423   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:57:24.456441   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.456458   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:57:24.456472   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:57:24.456487   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:57:24.456503   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:57:24.456520   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.456540   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:57:24.456560   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:57:24.456573   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:57:24.456584   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:57:24.456626   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:57:24.456639   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:57:24.456647   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:57:24.456657   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:57:24.456665   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:57:24.456676   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:57:24.456685   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:57:24.456695   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:57:24.456703   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:57:24.456714   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:57:24.456726   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:57:24.456739   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:57:24.456748   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:57:24.456761   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:57:24.456771   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.456782   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:57:24.456790   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:57:24.456811   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:57:24.456821   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:57:24.456832   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:57:24.456845   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:57:24.456853   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:57:24.456866   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:57:24.456875   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:57:24.456885   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:57:24.456893   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:57:24.456907   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:57:24.456918   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:57:24.456927   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:57:24.456937   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:57:24.456947   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:57:24.456959   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:57:24.456971   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:57:24.456990   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.457011   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.457023   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:57:24.457032   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:57:24.457044   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:57:24.457054   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:57:24.457067   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:57:24.457074   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:57:24.457083   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:57:24.457093   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:57:24.457105   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:57:24.457112   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:57:24.457125   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:57:24.457133   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:57:24.457145   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:57:24.457154   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:57:24.457168   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:57:24.457178   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457192   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457205   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457220   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:57:24.457235   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:57:24.457247   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:57:24.457258   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:57:24.457271   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:57:24.457284   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:57:24.457299   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457310   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.457322   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:57:24.457333   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:57:24.457345   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.457359   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:57:24.457370   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:57:24.457381   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:57:24.457396   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457410   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.457436   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:57:24.457460   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457481   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457500   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:57:24.457515   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457533   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:57:24.457586   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.457604   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.457613   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.457633   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457649   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:57:24.457664   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457680   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:57:24.457697   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.457717   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457740   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457763   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457785   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457817   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.457904   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457927   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457948   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457976   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457996   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.458019   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.458034   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458050   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:57:24.458090   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.458106   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.458116   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.458130   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458141   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458158   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.458170   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458184   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.458198   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458263   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.458295   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.458316   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.458339   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458367   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.458389   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.458409   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458429   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.458447   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:57:24.458468   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.458490   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.458512   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458529   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458542   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458572   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458587   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458602   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.458617   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458632   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458644   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458659   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.458674   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458686   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.458702   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458717   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.458732   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458746   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458762   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458777   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458790   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458824   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458839   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458852   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458865   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458879   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458889   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458911   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458925   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458939   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458952   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458964   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458983   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458998   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459016   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459031   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459048   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.459062   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459076   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.459090   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459104   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459118   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459132   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459145   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.459158   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459174   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.459186   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.459201   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459215   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459225   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459247   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459261   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459274   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459286   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459302   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459314   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459334   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459352   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.459392   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.459418   17440 command_runner.go:130] > Dec 29 06:56:17 functional-695625 kubelet[6517]: E1229 06:56:17.801052    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.459438   17440 command_runner.go:130] > Dec 29 06:56:19 functional-695625 kubelet[6517]: I1229 06:56:19.403026    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.459461   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.297746    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459483   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342467    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459502   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342554    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459515   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.342589    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459537   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342829    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459552   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.385984    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459567   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386062    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459579   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.386078    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459599   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386220    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459613   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.298955    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459634   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.734998    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.459649   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185639    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459662   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185732    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459676   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.185750    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459693   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493651    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459707   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493733    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459720   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.493755    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459741   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493996    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459753   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.510294    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459769   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511464    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459782   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511520    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459806   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.511535    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459829   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511684    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459845   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525404    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459859   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525467    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459875   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: I1229 06:56:34.525482    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459897   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525663    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459911   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.300040    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459924   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342011    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459938   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342082    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459950   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.342099    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459972   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342223    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459987   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567456    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460000   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567665    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460016   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.567686    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460036   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.568152    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460053   17440 command_runner.go:130] > Dec 29 06:56:47 functional-695625 kubelet[6517]: E1229 06:56:47.736964    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.460094   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.098168    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.27202431 +0000 UTC m=+0.287773690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.460108   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.300747    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460124   17440 command_runner.go:130] > Dec 29 06:56:53 functional-695625 kubelet[6517]: E1229 06:56:53.405155    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.460136   17440 command_runner.go:130] > Dec 29 06:56:56 functional-695625 kubelet[6517]: I1229 06:56:56.606176    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.460148   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.301915    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460162   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.330173    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.460182   17440 command_runner.go:130] > Dec 29 06:57:04 functional-695625 kubelet[6517]: E1229 06:57:04.738681    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.460195   17440 command_runner.go:130] > Dec 29 06:57:10 functional-695625 kubelet[6517]: E1229 06:57:10.302083    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460206   17440 command_runner.go:130] > Dec 29 06:57:20 functional-695625 kubelet[6517]: E1229 06:57:20.302612    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460221   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185645    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460236   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185704    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.460254   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.740062    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.460269   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.185952    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460283   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.186017    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460296   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.186034    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460308   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.873051    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460321   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874264    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460334   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874357    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460347   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.874375    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:57:24.460367   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874499    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460381   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460395   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892083    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460414   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: I1229 06:57:23.892098    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:57:24.460450   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892218    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460499   17440 command_runner.go:130] > Dec 29 06:57:24 functional-695625 kubelet[6517]: E1229 06:57:24.100978    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.27223373 +0000 UTC m=+0.287983111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
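For reference, the kubelet tail above and the etcd log below are captured by running shell commands inside the guest via ssh_runner: the kubelet lines are journal-formatted (presumably collected with journalctl), and the etcd lines come from "docker logs --tail 400 d81259f64136" as shown in the next step. A minimal Go sketch of reproducing the same capture from the host follows; it assumes the minikube binary is on PATH, the functional-695625 profile still exists, the kubelet runs as a systemd unit inside the VM, and the etcd container ID from this run (d81259f64136) is still valid.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each entry mirrors a log-gathering step from this report; the container ID
	// is taken from the run above and will differ on a fresh cluster.
	cmds := [][]string{
		{"minikube", "-p", "functional-695625", "ssh", "--", "sudo", "journalctl", "-u", "kubelet", "--no-pager", "-n", "200"},
		{"minikube", "-p", "functional-695625", "ssh", "--", "docker", "logs", "--tail", "400", "d81259f64136"},
	}
	for _, c := range cmds {
		// Run the command and print combined stdout/stderr, matching what
		// command_runner records in the report.
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v failed: %v\n", c, err)
		}
		fmt.Println(string(out))
	}
}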
	I1229 06:57:24.513870   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:57:24.513913   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:57:24.542868   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:57:24.542904   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:57:24.542974   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:57:24.542992   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:57:24.543020   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:57:24.543037   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:57:24.543067   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:57:24.543085   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:57:24.543199   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:57:24.543237   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:57:24.543258   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:57:24.543276   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:57:24.543291   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:57:24.543306   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:57:24.543327   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:57:24.543344   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:57:24.543365   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:57:24.543380   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:57:24.543393   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:57:24.543419   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:57:24.543437   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:57:24.543464   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:57:24.543483   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:57:24.543499   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:57:24.543511   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:57:24.543561   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:57:24.543585   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:57:24.543605   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:57:24.543623   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:57:24.543659   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:57:24.543680   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:57:24.543701   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:57:24.543722   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:57:24.543744   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:57:24.543770   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:57:24.543821   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:57:24.543840   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:57:24.543865   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:57:24.543886   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:57:24.543908   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:57:24.543927   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:57:24.543945   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.543962   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:57:24.543980   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:57:24.544010   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:57:24.544031   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:57:24.544065   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:57:24.544084   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:57:24.544103   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:57:24.544120   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:57:24.544136   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:57:24.544157   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:57:24.544176   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:57:24.544193   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:57:24.544213   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:57:24.544224   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:57:24.544248   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:57:24.544264   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:57:24.544283   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:57:24.544298   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:57:24.544314   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:57:24.544331   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:57:24.544345   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:57:24.544364   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:57:24.544381   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:57:24.544405   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:57:24.544430   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:57:24.544465   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:57:24.544517   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544537   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:57:24.544554   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:57:24.544575   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:57:24.544595   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:57:24.544623   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544641   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:57:24.544662   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:57:24.544683   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:57:24.544711   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544730   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.544767   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544807   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.544828   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:57:24.552509   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:57:24.552540   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:57:24.575005   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:57:24.575036   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:57:24.597505   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.597545   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.597560   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.597577   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.597596   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.597610   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:57:24.597628   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:57:24.597642   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.597654   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.597667   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:57:24.597682   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.597705   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.597733   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.597753   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.597765   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.597773   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:57:24.597803   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.597814   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:57:24.597825   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.597834   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.597841   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:57:24.597848   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.597856   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.597866   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.597874   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.597883   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.597900   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.597909   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.597916   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.597925   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.597936   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.597944   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.597953   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:57:24.597960   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.597973   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.597981   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.597991   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.597999   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.598010   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.598017   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598029   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:57:24.598041   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:57:24.598054   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598067   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598074   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598084   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598095   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:57:24.598104   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.598111   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.598117   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.598126   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.598132   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:57:24.598141   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.598154   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.598174   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.598186   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.598196   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.598205   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:57:24.598224   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.598235   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:57:24.598246   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.598256   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.598264   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.598273   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.598281   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.598289   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.598297   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.598306   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.598314   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.598320   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:57:24.598327   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598334   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.598345   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.598354   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.598365   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.598373   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:57:24.598381   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.598389   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.598400   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.598415   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.598431   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.598447   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.598463   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598476   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598492   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598503   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:57:24.598513   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.598522   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.598531   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.598538   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.598545   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:57:24.598555   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.598578   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.598591   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.598602   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.598613   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.598621   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:57:24.598642   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.598653   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:57:24.598664   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.598674   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.598683   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.598693   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.598701   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.598716   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.598724   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.598732   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.598760   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598774   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598787   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598815   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598832   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:57:24.598845   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598860   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598873   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598889   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598904   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.598918   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.598933   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598946   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598958   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:57:24.598973   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:57:24.598980   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598989   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598999   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:57:24.599008   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.599015   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.599022   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.599030   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:57:24.599036   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.599043   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:57:24.599054   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.599065   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.599077   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.599088   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.599099   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.599107   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:57:24.599120   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599138   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599151   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599168   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599185   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599198   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599213   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599228   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599241   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599257   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599270   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599285   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599297   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599319   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.599331   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:57:24.599346   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:57:24.599359   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:57:24.599376   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:57:24.599387   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:57:24.599405   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:57:24.599423   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:57:24.599452   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:57:24.599472   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:57:24.599489   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:57:24.599503   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:57:24.599517   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:57:24.599529   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:57:24.599544   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:57:24.599559   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:57:24.599572   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:57:24.599587   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:57:24.599602   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:57:24.599615   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:57:24.599631   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:57:24.599644   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:57:24.599654   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:57:24.599664   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.599673   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.599682   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.599692   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.599700   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.599710   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.599747   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.599756   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.599772   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:57:24.599782   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.599789   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:57:24.599806   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.599814   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:57:24.599822   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.599830   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.599841   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.599849   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.599860   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:57:24.599868   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.599879   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.599886   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.599896   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.599907   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.599914   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.599922   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.599934   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599953   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599970   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599983   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600000   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600017   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600034   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:57:24.600049   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600063   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600079   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600092   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600107   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600121   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600137   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600152   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600164   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600177   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600190   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600207   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:57:24.600223   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600235   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:57:24.600247   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:57:24.600261   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600276   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600288   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:57:24.600304   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600317   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600331   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600345   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600357   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600373   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600386   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 dockerd[4014]: time="2025-12-29T06:56:32.448119389Z" level=info msg="ignoring event" container=0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600403   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.600423   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:57:24.600448   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600472   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600490   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 dockerd[4014]: time="2025-12-29T06:57:22.465508622Z" level=info msg="ignoring event" container=b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.619075   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:57:24.619123   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:58:24.700496   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:58:24.700542   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.081407425s)
	W1229 06:58:24.700578   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
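
	The describe-nodes call above hit the one-minute server timeout, which points at the apiserver rather than at kubectl. A minimal manual probe (not part of the test run, and assuming the endpoint this log polls later, https://192.168.39.121:8441) would be:

	    # Probe the apiserver health endpoint directly from inside the guest VM
	    # (-k skips certificate verification; the profile name comes from this test)
	    minikube -p functional-695625 ssh -- curl -sk https://192.168.39.121:8441/healthz

	    # Check whether a kube-apiserver container is actually running in the guest
	    minikube -p functional-695625 ssh -- sudo docker ps --filter name=kube-apiserver
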
	I1229 06:58:24.700591   17440 logs.go:123] Gathering logs for etcd [6b7711ee25a2] ...
	I1229 06:58:24.700607   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b7711ee25a2"
	I1229 06:58:24.726206   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.924768Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:58:24.726238   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925193Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:58:24.726283   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925252Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:58:24.726296   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925487Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:58:24.726311   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925602Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:58:24.726321   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925710Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:58:24.726342   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925810Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:58:24.726358   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.934471Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:58:24.726438   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.935217Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:58:24.726461   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.937503Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000068080}"}
	I1229 06:58:24.726472   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940423Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:58:24.726483   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940850Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.479356ms"}
	I1229 06:58:24.726492   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.941120Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":499}
	I1229 06:58:24.726503   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945006Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:58:24.726517   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945707Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:58:24.726528   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945966Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:58:24.726540   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.951906Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":499}
	I1229 06:58:24.726552   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952063Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:58:24.726560   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952160Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:58:24.726577   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952338Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:58:24.726590   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952385Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:58:24.726607   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952396Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:58:24.726618   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952406Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:58:24.726629   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952416Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:58:24.726636   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952460Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:58:24.726647   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:58:24.726657   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 3"}
	I1229 06:58:24.726670   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 3, commit: 499, applied: 0, lastindex: 499, lastterm: 3]"}
	I1229 06:58:24.726680   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.955095Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:58:24.726698   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.961356Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:58:24.726711   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.967658Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:58:24.726723   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.968487Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:58:24.726735   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969020Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:58:24.726750   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969260Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:58:24.726765   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969708Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:58:24.726784   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970043Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:58:24.726826   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970828Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:58:24.726839   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971046Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:58:24.726848   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970057Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:58:24.726858   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971258Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:58:24.726870   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970152Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:58:24.726883   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971336Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:58:24.726896   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971370Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:58:24.726906   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970393Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:58:24.726922   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972410Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:58:24.726935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972698Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:58:24.726947   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 3"}
	I1229 06:58:24.726956   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 3"}
	I1229 06:58:24.726969   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:58:24.726982   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:58:24.726997   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 4"}
	I1229 06:58:24.727009   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 4"}
	I1229 06:58:24.727020   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:58:24.727029   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 4"}
	I1229 06:58:24.727039   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.356018Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 4"}
	I1229 06:58:24.727056   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358237Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:58:24.727064   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358323Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:58:24.727072   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358268Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:58:24.727081   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:58:24.727089   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:58:24.727100   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360417Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:58:24.727109   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:58:24.727120   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:58:24.727132   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363760Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:58:24.733042   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:58:24.733064   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:58:24.755028   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.755231   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:58:24.755256   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:58:24.776073   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:58:24.776109   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:58:24.776120   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:58:24.776135   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:58:24.776154   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:58:24.776162   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:58:24.776180   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:58:24.776188   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:58:24.776195   17440 command_runner.go:130] !  >
	I1229 06:58:24.776212   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:58:24.776224   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:58:24.776249   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:58:24.776257   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:58:24.776266   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.776282   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:58:24.776296   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:58:24.776307   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:58:24.776328   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:58:24.776350   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:58:24.776366   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:58:24.776376   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:58:24.776388   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:58:24.776404   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:58:24.776420   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:58:24.776439   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:58:24.776453   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:58:24.778558   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:58:24.778595   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:58:24.793983   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:58:24.794025   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:58:24.794040   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:58:24.794054   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:58:24.794069   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:58:24.794079   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:58:24.794096   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:58:24.794106   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:58:24.794117   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:58:24.794125   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:58:24.794136   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:58:24.794146   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:58:24.794160   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:24.794167   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:58:24.794178   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:58:24.794186   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:24.794196   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:24.794207   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:24.794215   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:58:24.794221   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:58:24.794229   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:58:24.794241   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:58:24.794252   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:58:24.794260   17440 command_runner.go:130] > [ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:24.794271   17440 command_runner.go:130] > [Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:24.795355   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:58:24.795387   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:58:24.820602   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.820635   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:58:24.820646   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:58:24.820657   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:58:24.820665   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:58:24.820672   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.820681   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:58:24.820692   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:58:24.820698   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:58:24.820705   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:58:24.822450   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:58:24.822473   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:58:24.844122   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.844156   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:58:24.844170   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.844184   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:58:24.844201   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:24.844210   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:58:24.844218   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:58:24.845429   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:58:24.845453   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:58:24.867566   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:58:24.867597   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:58:24.867607   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:58:24.867615   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867622   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867633   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:58:24.867653   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:58:24.867681   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:58:24.867694   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867704   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867719   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:58:24.867734   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867750   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867763   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867817   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867836   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867848   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867859   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867871   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867883   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867891   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867901   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867914   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867926   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867944   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867956   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867972   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867982   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867997   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868013   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868028   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868048   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868063   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868071   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868081   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868098   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868111   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868127   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868140   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868153   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868164   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868177   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868192   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868207   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868221   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868236   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868247   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868258   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868275   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868290   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868304   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868320   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868332   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868342   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868358   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868373   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868385   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868400   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868414   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868425   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868438   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.872821   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:58:24.872842   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 06:58:24.872901   17440 out.go:285] X Problems detected in kube-apiserver [b206d555ad19]:
	W1229 06:58:24.872915   17440 out.go:285]   E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:58:24.872919   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:58:24.872923   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
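
	The repeated kube-apiserver crash reported above ("bind: address already in use" on 0.0.0.0:8441) means something else already owns that port inside the guest. A quick diagnostic sketch (not part of the test output; assumes the ss utility from iproute2 is present in the minikube guest image):

	    # Show the listener currently bound to 8441 together with its owning PID/process
	    minikube -p functional-695625 ssh -- sudo ss -ltnp 'sport = :8441'
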
	I1229 06:58:34.875381   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:58:39.877679   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:58:39.877779   17440 kubeadm.go:602] duration metric: took 4m48.388076341s to restartPrimaryControlPlane
	W1229 06:58:39.877879   17440 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1229 06:58:39.877946   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 06:58:39.930050   17440 command_runner.go:130] ! W1229 06:58:39.921577    8187 resetconfiguration.go:53] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1229 06:58:49.935089   17440 command_runner.go:130] ! W1229 06:58:49.926653    8187 reset.go:141] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
	I1229 06:58:49.935131   17440 command_runner.go:130] ! W1229 06:58:49.926754    8187 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
	I1229 06:58:50.998307   17440 command_runner.go:130] > [reset] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I1229 06:58:50.998341   17440 command_runner.go:130] > [reset] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
	I1229 06:58:50.998348   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:50.998357   17440 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/minikube/etcd
	I1229 06:58:50.998366   17440 command_runner.go:130] > [reset] Stopping the kubelet service
	I1229 06:58:50.998372   17440 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I1229 06:58:50.998386   17440 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I1229 06:58:50.998407   17440 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I1229 06:58:50.998417   17440 command_runner.go:130] > The reset process does not perform cleanup of CNI plugin configuration,
	I1229 06:58:50.998428   17440 command_runner.go:130] > network filtering rules and kubeconfig files.
	I1229 06:58:50.998434   17440 command_runner.go:130] > For information on how to perform this cleanup manually, please see:
	I1229 06:58:50.998442   17440 command_runner.go:130] >     https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
	I1229 06:58:50.998458   17440 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (11.120499642s)
	I1229 06:58:50.998527   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:58:51.015635   17440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:58:51.028198   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:58:51.040741   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1229 06:58:51.040780   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1229 06:58:51.040811   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1229 06:58:51.040826   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.040865   17440 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.040877   17440 kubeadm.go:158] found existing configuration files:
	
	I1229 06:58:51.040925   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:58:51.051673   17440 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.052090   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.052155   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:58:51.064755   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:58:51.076455   17440 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.076517   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.076577   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:58:51.088881   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.099253   17440 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.099652   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.099710   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.111487   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:58:51.122532   17440 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.122905   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.122972   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:58:51.135143   17440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 06:58:51.355420   17440 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.355450   17440 command_runner.go:130] ! 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.355543   17440 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 06:58:51.355556   17440 command_runner.go:130] ! [preflight] Some fatal errors occurred:
	I1229 06:58:51.355615   17440 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.355625   17440 command_runner.go:130] ! 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.355790   17440 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.355837   17440 command_runner.go:130] ! [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.356251   17440 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.356265   17440 command_runner.go:130] ! error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.356317   17440 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.356324   17440 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.357454   17440 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.357471   17440 command_runner.go:130] > [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.357544   17440 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:58:51.357561   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	W1229 06:58:51.357680   17440 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 06:58:51.357753   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 06:58:51.401004   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:51.401036   17440 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I1229 06:58:51.401047   17440 command_runner.go:130] > [reset] Stopping the kubelet service
	I1229 06:58:51.408535   17440 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I1229 06:58:51.413813   17440 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I1229 06:58:51.415092   17440 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I1229 06:58:51.415117   17440 command_runner.go:130] > The reset process does not perform cleanup of CNI plugin configuration,
	I1229 06:58:51.415128   17440 command_runner.go:130] > network filtering rules and kubeconfig files.
	I1229 06:58:51.415137   17440 command_runner.go:130] > For information on how to perform this cleanup manually, please see:
	I1229 06:58:51.415145   17440 command_runner.go:130] >     https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
	I1229 06:58:51.415645   17440 command_runner.go:130] ! W1229 06:58:51.391426    8625 resetconfiguration.go:53] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1229 06:58:51.415670   17440 command_runner.go:130] ! W1229 06:58:51.392518    8625 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
	I1229 06:58:51.415739   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:58:51.432316   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:58:51.444836   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1229 06:58:51.444860   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1229 06:58:51.444867   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1229 06:58:51.444874   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.445417   17440 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.445435   17440 kubeadm.go:158] found existing configuration files:
	
	I1229 06:58:51.445485   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:58:51.457038   17440 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.457099   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.457146   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:58:51.469980   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:58:51.480965   17440 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.481435   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.481498   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:58:51.493408   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.504342   17440 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.504404   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.504468   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.516567   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:58:51.526975   17440 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.527475   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.527532   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:58:51.539365   17440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 06:58:51.587038   17440 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.587068   17440 command_runner.go:130] > [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.587108   17440 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:58:51.587113   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:51.738880   17440 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.738912   17440 command_runner.go:130] ! 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.738963   17440 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 06:58:51.738975   17440 command_runner.go:130] ! [preflight] Some fatal errors occurred:
	I1229 06:58:51.739029   17440 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.739038   17440 command_runner.go:130] ! 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.739157   17440 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.739166   17440 command_runner.go:130] ! [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.739271   17440 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.739294   17440 command_runner.go:130] ! error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.739348   17440 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.739355   17440 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.739406   17440 kubeadm.go:403] duration metric: took 5m0.289116828s to StartCluster
	I1229 06:58:51.739455   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 06:58:51.739507   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 06:58:51.776396   17440 cri.go:96] found id: ""
	I1229 06:58:51.776420   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.776428   17440 logs.go:284] No container was found matching "kube-apiserver"
	I1229 06:58:51.776434   17440 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 06:58:51.776522   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 06:58:51.808533   17440 cri.go:96] found id: ""
	I1229 06:58:51.808556   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.808563   17440 logs.go:284] No container was found matching "etcd"
	I1229 06:58:51.808570   17440 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 06:58:51.808625   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 06:58:51.841860   17440 cri.go:96] found id: ""
	I1229 06:58:51.841887   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.841894   17440 logs.go:284] No container was found matching "coredns"
	I1229 06:58:51.841900   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 06:58:51.841955   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 06:58:51.875485   17440 cri.go:96] found id: ""
	I1229 06:58:51.875512   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.875520   17440 logs.go:284] No container was found matching "kube-scheduler"
	I1229 06:58:51.875526   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 06:58:51.875576   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 06:58:51.909661   17440 cri.go:96] found id: ""
	I1229 06:58:51.909699   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.909712   17440 logs.go:284] No container was found matching "kube-proxy"
	I1229 06:58:51.909720   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 06:58:51.909790   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 06:58:51.943557   17440 cri.go:96] found id: ""
	I1229 06:58:51.943594   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.943607   17440 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 06:58:51.943616   17440 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 06:58:51.943685   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 06:58:51.979189   17440 cri.go:96] found id: ""
	I1229 06:58:51.979219   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.979228   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:58:51.979234   17440 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 06:58:51.979285   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 06:58:52.013436   17440 cri.go:96] found id: ""
	I1229 06:58:52.013472   17440 logs.go:282] 0 containers: []
	W1229 06:58:52.013482   17440 logs.go:284] No container was found matching "storage-provisioner"
	I1229 06:58:52.013494   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:58:52.013507   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:58:52.030384   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.030429   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:58:52.030454   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.030481   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030506   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030530   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030550   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:58:52.030574   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:58:52.030601   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:58:52.030643   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:58:52.030670   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.030694   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:58:52.030721   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:58:52.030757   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:58:52.030787   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:58:52.030826   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.030853   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.030893   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.030921   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:58:52.030943   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:58:52.030981   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:58:52.031015   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.031053   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:58:52.031087   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:58:52.031117   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:58:52.031146   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.031189   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.031223   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:58:52.031253   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.031281   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.031311   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.031347   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031383   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031422   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031445   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:58:52.031467   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031491   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:58:52.031516   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031538   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:58:52.031562   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031584   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:58:52.031606   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.031628   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031651   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031673   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031695   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.031717   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.031738   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031763   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031786   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:58:52.031824   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031855   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:58:52.031894   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.031949   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.031981   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:58:52.032005   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:58:52.032025   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:58:52.032048   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032069   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:58:52.032093   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.032112   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.032150   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.032170   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:58:52.032192   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:58:52.032214   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032234   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032269   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032290   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032314   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:58:52.032335   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:58:52.032371   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032395   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.032414   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.032452   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.032473   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032495   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:58:52.032530   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032552   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032573   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032608   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032631   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032655   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032676   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032696   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:58:52.032735   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032819   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:58:52.032845   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032864   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032899   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032919   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:52.032935   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:58:52.032948   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:58:52.032960   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.032981   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:58:52.032995   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.033012   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:58:52.033029   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:52.033042   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:58:52.033062   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:58:52.033080   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:58:52.033101   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:58:52.033120   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:52.033138   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:58:52.033166   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:58:52.033187   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:58:52.033206   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:58:52.033274   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:58:52.033294   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:58:52.033309   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:58:52.033326   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:58:52.033343   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:58:52.033359   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:58:52.033378   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:58:52.033398   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:58:52.033413   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:58:52.033431   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:58:52.033453   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:58:52.033476   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:58:52.033492   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:58:52.033507   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:58:52.033526   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033542   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:58:52.033559   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:58:52.033609   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:58:52.033625   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:58:52.033642   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:58:52.033665   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:58:52.033681   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:58:52.033700   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:58:52.033718   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:58:52.033734   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:58:52.033751   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:58:52.033776   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:58:52.033808   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:58:52.033826   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:58:52.033840   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:58:52.033855   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:58:52.033878   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:58:52.033905   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:58:52.033937   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033974   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033993   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:58:52.034010   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:58:52.034030   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:58:52.034050   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:58:52.034084   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:58:52.034099   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:58:52.034116   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:58:52.034134   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:58:52.034152   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:58:52.034167   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:58:52.034186   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:58:52.034203   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:58:52.034224   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:58:52.034241   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:58:52.034265   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:58:52.034286   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034308   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034332   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034358   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:58:52.034380   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:58:52.034404   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:58:52.034427   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:58:52.034450   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:58:52.034472   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:58:52.034499   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034521   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.034544   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:58:52.034566   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:58:52.034588   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:58:52.034611   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:58:52.034633   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:58:52.034655   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:58:52.034678   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034697   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.034724   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:58:52.034749   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034771   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034819   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:58:52.034843   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034873   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:58:52.034936   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.034963   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.034993   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035018   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035049   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:58:52.035071   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035099   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:58:52.035126   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.035159   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035194   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035228   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035263   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035299   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.035333   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035368   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035408   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035445   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035477   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.035512   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.035534   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035563   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:58:52.035631   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.035658   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.035677   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035699   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035720   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035749   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.035771   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035814   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.035838   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035902   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.035927   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.035947   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035978   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036010   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.036038   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.036061   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036082   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.036102   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:58:52.036121   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.036141   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.036165   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036190   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036212   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036251   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036275   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036299   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.036323   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036345   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036369   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036393   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.036418   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036441   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.036464   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036488   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.036511   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036536   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036561   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036584   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036606   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036642   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036664   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036687   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036711   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036734   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036754   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036806   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036895   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036922   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036945   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036973   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037009   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037032   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037052   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037076   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037098   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.037122   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037144   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.037168   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037189   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037212   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037235   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037254   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037278   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037303   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.037325   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037348   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037372   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037392   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037424   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037449   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037472   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037497   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037518   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037539   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037574   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037604   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.037669   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.037694   17440 command_runner.go:130] > Dec 29 06:56:17 functional-695625 kubelet[6517]: E1229 06:56:17.801052    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.037713   17440 command_runner.go:130] > Dec 29 06:56:19 functional-695625 kubelet[6517]: I1229 06:56:19.403026    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.037734   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.297746    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.037760   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342467    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037784   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342554    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037816   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.342589    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037851   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342829    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037875   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.385984    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037897   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386062    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037917   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.386078    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037950   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386220    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037981   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.298955    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038011   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.734998    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.038035   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185639    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038059   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185732    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038079   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.185750    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.038102   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493651    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038125   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493733    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038147   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.493755    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038182   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493996    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038203   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.510294    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.038223   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511464    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038243   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511520    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038260   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.511535    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038297   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511684    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038321   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525404    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038344   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525467    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038365   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: I1229 06:56:34.525482    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038401   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525663    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038423   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.300040    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038449   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342011    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038471   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342082    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038491   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.342099    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038526   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342223    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038549   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567456    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038585   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567665    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038608   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.567686    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038643   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.568152    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038670   17440 command_runner.go:130] > Dec 29 06:56:47 functional-695625 kubelet[6517]: E1229 06:56:47.736964    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.038735   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.098168    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.27202431 +0000 UTC m=+0.287773690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.038758   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.300747    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038785   17440 command_runner.go:130] > Dec 29 06:56:53 functional-695625 kubelet[6517]: E1229 06:56:53.405155    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.038817   17440 command_runner.go:130] > Dec 29 06:56:56 functional-695625 kubelet[6517]: I1229 06:56:56.606176    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.038842   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.301915    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038869   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.330173    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.038900   17440 command_runner.go:130] > Dec 29 06:57:04 functional-695625 kubelet[6517]: E1229 06:57:04.738681    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.038922   17440 command_runner.go:130] > Dec 29 06:57:10 functional-695625 kubelet[6517]: E1229 06:57:10.302083    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038946   17440 command_runner.go:130] > Dec 29 06:57:20 functional-695625 kubelet[6517]: E1229 06:57:20.302612    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038977   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185645    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039003   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185704    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.039034   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.740062    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.039059   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.185952    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039082   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.186017    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039102   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.186034    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.039126   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.873051    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.039149   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874264    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039171   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874357    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039191   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.874375    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039227   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874499    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039252   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039275   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892083    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039295   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: I1229 06:57:23.892098    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039330   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892218    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039396   17440 command_runner.go:130] > Dec 29 06:57:24 functional-695625 kubelet[6517]: E1229 06:57:24.100978    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.27223373 +0000 UTC m=+0.287983111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.039419   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.302837    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039444   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.341968    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039468   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.342033    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039488   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: I1229 06:57:30.342050    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039523   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.342233    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039550   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.608375    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.039576   17440 command_runner.go:130] > Dec 29 06:57:32 functional-695625 kubelet[6517]: E1229 06:57:32.186377    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039598   17440 command_runner.go:130] > Dec 29 06:57:32 functional-695625 kubelet[6517]: E1229 06:57:32.186459    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.039675   17440 command_runner.go:130] > Dec 29 06:57:33 functional-695625 kubelet[6517]: E1229 06:57:33.188187    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039700   17440 command_runner.go:130] > Dec 29 06:57:33 functional-695625 kubelet[6517]: E1229 06:57:33.188267    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.039715   17440 command_runner.go:130] > Dec 29 06:57:37 functional-695625 kubelet[6517]: I1229 06:57:37.010219    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.039749   17440 command_runner.go:130] > Dec 29 06:57:38 functional-695625 kubelet[6517]: E1229 06:57:38.741770    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.039773   17440 command_runner.go:130] > Dec 29 06:57:40 functional-695625 kubelet[6517]: E1229 06:57:40.303258    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039808   17440 command_runner.go:130] > Dec 29 06:57:50 functional-695625 kubelet[6517]: E1229 06:57:50.304120    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039837   17440 command_runner.go:130] > Dec 29 06:57:55 functional-695625 kubelet[6517]: E1229 06:57:55.743031    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.039903   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 kubelet[6517]: E1229 06:57:58.103052    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.272240811 +0000 UTC m=+0.287990191,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.039929   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.304627    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039954   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.432518    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.039991   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.432667    6517 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)
	I1229 06:58:52.040014   17440 command_runner.go:130] > Dec 29 06:58:10 functional-695625 kubelet[6517]: E1229 06:58:10.305485    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040037   17440 command_runner.go:130] > Dec 29 06:58:11 functional-695625 kubelet[6517]: E1229 06:58:11.012407    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.040068   17440 command_runner.go:130] > Dec 29 06:58:12 functional-695625 kubelet[6517]: E1229 06:58:12.743824    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040086   17440 command_runner.go:130] > Dec 29 06:58:18 functional-695625 kubelet[6517]: I1229 06:58:18.014210    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.040107   17440 command_runner.go:130] > Dec 29 06:58:20 functional-695625 kubelet[6517]: E1229 06:58:20.306630    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040127   17440 command_runner.go:130] > Dec 29 06:58:24 functional-695625 kubelet[6517]: E1229 06:58:24.186554    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040149   17440 command_runner.go:130] > Dec 29 06:58:24 functional-695625 kubelet[6517]: E1229 06:58:24.186719    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.040176   17440 command_runner.go:130] > Dec 29 06:58:29 functional-695625 kubelet[6517]: E1229 06:58:29.745697    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040195   17440 command_runner.go:130] > Dec 29 06:58:30 functional-695625 kubelet[6517]: E1229 06:58:30.307319    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040256   17440 command_runner.go:130] > Dec 29 06:58:32 functional-695625 kubelet[6517]: E1229 06:58:32.105206    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.286010652 +0000 UTC m=+0.301760032,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.040279   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.184790    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040300   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.184918    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040319   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: I1229 06:58:39.184949    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040354   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.185100    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040377   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184709    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040397   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184771    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.040413   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.308010    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040433   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.185947    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040455   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.186016    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040477   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.186033    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040498   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503148    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040520   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503225    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040538   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.503241    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040576   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040596   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040619   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040640   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040658   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040692   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040711   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040729   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040741   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040764   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040784   17440 command_runner.go:130] > Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040807   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:58:52.040815   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:58:52.040821   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.040830   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	I1229 06:58:52.093067   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:58:52.093106   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:58:52.108863   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:58:52.108898   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:58:52.108912   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:58:52.108925   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:58:52.108937   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:58:52.108945   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:58:52.108951   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:58:52.108957   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:58:52.108962   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:58:52.108971   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:58:52.108975   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:58:52.108980   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:58:52.108992   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:52.108997   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:58:52.109006   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:58:52.109011   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:52.109021   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:52.109031   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:52.109036   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:58:52.109043   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:58:52.109048   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:58:52.109055   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:58:52.109062   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:58:52.109067   17440 command_runner.go:130] > [ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109072   17440 command_runner.go:130] > [Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109080   17440 command_runner.go:130] > [Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109088   17440 command_runner.go:130] > [  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109931   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:58:52.109946   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:59:52.193646   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:59:52.193695   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.083736259s)
	W1229 06:59:52.193730   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:59:52.193743   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:59:52.193757   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:59:52.211424   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.211464   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.211503   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.211519   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.211538   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:59:52.211555   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:59:52.211569   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:59:52.211587   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.211601   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.211612   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:59:52.211630   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.211652   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.211672   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.211696   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.211714   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.211730   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:59:52.211773   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.211790   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:59:52.211824   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.211841   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.211855   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:59:52.211871   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.211884   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.211899   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.211913   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.211926   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.211948   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.211959   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.211970   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.211984   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.212011   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.212025   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.212039   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:59:52.212064   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.212079   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.212093   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.212108   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.212125   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.212139   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.212152   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212172   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:59:52.212192   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:59:52.212215   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.212237   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.212252   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.212266   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.212285   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:59:52.212301   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.212316   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.212331   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.212341   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.212357   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:59:52.212372   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.212392   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.212423   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.212444   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.212461   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.212477   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:59:52.212512   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.212529   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:59:52.212547   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.212562   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.212577   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.212594   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.212612   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.212628   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.212643   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.212656   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.212671   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.212684   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:59:52.212699   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212714   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.212732   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.212751   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.212767   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.212783   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:59:52.212808   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.212827   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.212844   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.212864   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.212881   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.212899   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.212916   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212932   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.212949   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.212974   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:59:52.212995   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.213006   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.213020   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.213033   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.213055   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:59:52.213073   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.213094   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.213115   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.213135   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.213153   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.213169   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:59:52.213204   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.213221   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:59:52.213242   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.213258   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.213275   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.213291   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.213308   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.213321   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.213334   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.213348   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.213387   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213414   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213440   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213465   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213486   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:59:52.213507   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213528   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213549   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213573   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213595   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213616   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213637   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.213655   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.213675   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:59:52.213697   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:59:52.213709   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.213724   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.213735   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:59:52.213749   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.213759   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.213774   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.213786   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:59:52.213809   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.213822   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:59:52.213839   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.213856   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.213874   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.213891   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.213907   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.213920   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:59:52.213942   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213963   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213985   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214006   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214028   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214055   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214078   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214099   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214122   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214144   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214166   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214190   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214211   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214242   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.214258   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:59:52.214283   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:59:52.214298   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:59:52.214323   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:59:52.214341   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:59:52.214365   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:59:52.214380   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:59:52.214405   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:59:52.214421   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:59:52.214447   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:59:52.214464   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:59:52.214489   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:59:52.214506   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:59:52.214531   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:59:52.214553   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:59:52.214576   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:59:52.214600   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:59:52.214623   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:59:52.214646   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:59:52.214668   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:59:52.214690   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:59:52.214703   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:59:52.214721   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.214735   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.214748   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.214762   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.214775   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.214788   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.215123   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.215148   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.215180   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:59:52.215194   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.215210   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:59:52.215222   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.215233   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:59:52.215247   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.215265   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.215283   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.215299   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.215312   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:59:52.215324   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.215340   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.215355   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.215372   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.215389   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.215401   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.215409   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.215430   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215454   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215478   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215500   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215517   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215532   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215549   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:59:52.215565   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215578   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215593   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215606   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215622   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215643   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215667   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215688   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215712   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215738   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215762   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215839   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:59:52.215868   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215888   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:59:52.215912   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:59:52.215937   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215959   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215979   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:59:52.216007   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216027   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216051   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216067   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216084   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216097   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216112   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 dockerd[4014]: time="2025-12-29T06:56:32.448119389Z" level=info msg="ignoring event" container=0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216128   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216141   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216157   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216171   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216195   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 dockerd[4014]: time="2025-12-29T06:57:22.465508622Z" level=info msg="ignoring event" container=b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216222   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216243   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216263   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216276   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216289   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 dockerd[4014]: time="2025-12-29T06:58:43.458641345Z" level=info msg="ignoring event" container=07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216304   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.011072219Z" level=info msg="ignoring event" container=173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216318   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.102126666Z" level=info msg="ignoring event" container=6b7711ee25a2df71f8c7d296f7186875ebd6ab978a71d33f177de0cc3055645b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216331   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.266578298Z" level=info msg="ignoring event" container=a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216346   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.365376654Z" level=info msg="ignoring event" container=fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216365   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.452640794Z" level=info msg="ignoring event" container=4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216380   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.557330204Z" level=info msg="ignoring event" container=d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216392   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.666151542Z" level=info msg="ignoring event" container=0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216409   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.751481082Z" level=info msg="ignoring event" container=f48fc04e347519b276e239ee9a6b0b8e093862313e46174a1815efae670eec9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216427   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535': Error response from daemon: No such container: 4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535"
	I1229 06:59:52.216440   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535'"
	I1229 06:59:52.216455   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216467   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216484   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be': Error response from daemon: No such container: bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be"
	I1229 06:59:52.216495   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be'"
	I1229 06:59:52.216512   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e': Error response from daemon: No such container: a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e"
	I1229 06:59:52.216525   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e'"
	I1229 06:59:52.216542   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974': Error response from daemon: No such container: d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:59:52.216554   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974'"
	I1229 06:59:52.216568   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00': Error response from daemon: No such container: 6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:59:52.216582   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	I1229 06:59:52.216596   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216611   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216628   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	I1229 06:59:52.216642   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	I1229 06:59:52.216660   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:59:52.216673   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	I1229 06:59:52.238629   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:59:52.238668   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:59:52.287732   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	W1229 06:59:52.290016   17440 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	W1229 06:59:52.290080   17440 out.go:285] * 
	W1229 06:59:52.290145   17440 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 06:59:52.290156   17440 out.go:285] * 
	W1229 06:59:52.290452   17440 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:59:52.293734   17440 out.go:203] 
	W1229 06:59:52.295449   17440 out.go:285] X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 06:59:52.295482   17440 out.go:285] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1229 06:59:52.295500   17440 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1229 06:59:52.296904   17440 out.go:203] 
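The GUEST_PORT_IN_USE exit above comes from kubeadm's preflight check: something inside the guest is already bound to the apiserver port 8441 (most likely a kube-apiserver left over from the earlier start of this profile), so `kubeadm init` refuses to proceed. A minimal diagnostic sketch, assuming shell access to the node via `minikube ssh`, that the profile name functional-695625 from the log is still valid, and that ss/lsof are available in the guest image; note that the suggested `lsof -p<port>` actually takes a PID, so a lookup by port needs `lsof -i` (or ss):

    # open a shell inside the minikube guest for this profile
    minikube ssh -p functional-695625

    # show which process is listening on the apiserver port 8441
    sudo ss -ltnp 'sport = :8441'
    # equivalent lookup with lsof: -i selects by network address (-p would expect a PID)
    sudo lsof -i :8441

    # if a stale kube-apiserver is holding the port, stop the PID reported above
    # (<pid> is a placeholder taken from the ss/lsof output)
    sudo kill <pid>

Forcing past the check with --ignore-preflight-errors=Port-8441 would only hide the conflict; finding and stopping the stale listener is what actually lets the soft start proceed.
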
	
	
	==> Docker <==
	Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="error getting RW layer size for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535': Error response from daemon: No such container: 4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535'"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="error getting RW layer size for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be': Error response from daemon: No such container: bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be'"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="error getting RW layer size for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974': Error response from daemon: No such container: d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974'"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="error getting RW layer size for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e': Error response from daemon: No such container: a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e'"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="error getting RW layer size for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00': Error response from daemon: No such container: 6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	Dec 29 06:59:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:59:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> kernel <==
	 07:01:09 up 8 min,  0 users,  load average: 0.03, 0.23, 0.16
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.185100    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184709    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184771    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.308010    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.185947    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.186016    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.186033    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503148    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503225    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.503241    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.784701968s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (484.11s)
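
Illustrative only, not output captured by the test run: kubeadm failed its preflight above with "[ERROR Port-8441]: Port 8441 is in use", and the cluster config in this report sets APIServerPort:8441. Assuming shell access to the functional-695625 guest (for example via minikube -p functional-695625 ssh), a minimal sketch for finding the process holding that port:

    # inside the functional-695625 guest
    sudo ss -ltnp | grep 8441    # listening TCP sockets with owning pid/program name
    sudo lsof -i :8441           # same check via lsof, if lsof is available in the guest

A kube-apiserver left over from the first start would hold 8441 and explain the conflict; the kubelet log above shows kube-apiserver being restarted repeatedly on this node.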

TestFunctional/serial/KubectlGetPods (153.45s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-695625 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-695625 get po -A: exit status 1 (1m0.086284502s)

** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-695625 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)\n"*: args "kubectl --context functional-695625 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-695625 get po -A"
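
Illustrative only, not output captured by the test run: the kubectl error above is a server-side timeout, so querying the apiserver health endpoints directly separates "apiserver not serving" from a client-side problem. The endpoint 192.168.39.121:8441 is taken from the cluster config in this report; these commands are a sketch, not something the harness executes:

    kubectl --context functional-695625 get --raw '/readyz?verbose' --request-timeout=10s
    curl -k --max-time 10 https://192.168.39.121:8441/healthz   # -k because the cert is issued by the cluster's own CA (minikubeCA)

Given the SoftStart failure above (kube-apiserver in CrashLoopBackOff and port 8441 already in use), a timeout here is consistent with the apiserver never becoming ready rather than with a kubectl misconfiguration.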
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.788044409s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
E1229 07:03:43.100474   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m1.118733926s)
helpers_test.go:261: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ addons-909246 ssh cat /opt/local-path-provisioner/pvc-60e48b23-4f43-4f44-8576-c979927d0800_default_test-pvc/file1 │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:50 UTC │ 29 Dec 25 06:50 UTC │
	│ addons  │ addons-909246 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                   │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:50 UTC │ 29 Dec 25 06:50 UTC │
	│ addons  │ addons-909246 addons disable volumesnapshots --alsologtostderr -v=1                                               │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:50 UTC │ 29 Dec 25 06:50 UTC │
	│ addons  │ addons-909246 addons disable csi-hostpath-driver --alsologtostderr -v=1                                           │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:50 UTC │ 29 Dec 25 06:51 UTC │
	│ stop    │ -p addons-909246                                                                                                  │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ addons  │ enable dashboard -p addons-909246                                                                                 │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ addons  │ disable dashboard -p addons-909246                                                                                │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ addons  │ disable gvisor -p addons-909246                                                                                   │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ delete  │ -p addons-909246                                                                                                  │ addons-909246     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ start   │ -p nospam-039815 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-039815 --driver=kvm2                       │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ start   │ nospam-039815 --log_dir /tmp/nospam-039815 start --dry-run                                                        │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │                     │
	│ start   │ nospam-039815 --log_dir /tmp/nospam-039815 start --dry-run                                                        │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │                     │
	│ start   │ nospam-039815 --log_dir /tmp/nospam-039815 start --dry-run                                                        │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │                     │
	│ pause   │ nospam-039815 --log_dir /tmp/nospam-039815 pause                                                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ pause   │ nospam-039815 --log_dir /tmp/nospam-039815 pause                                                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ pause   │ nospam-039815 --log_dir /tmp/nospam-039815 pause                                                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:52 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ delete  │ -p nospam-039815                                                                                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ start   │ -p functional-695625 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:53 UTC │
	│ start   │ -p functional-695625 --alsologtostderr -v=8                                                                       │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:53:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:53:22.250786   17440 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:53:22.251073   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:53:22.251082   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:53:22.251087   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:53:22.251322   17440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 06:53:22.251807   17440 out.go:368] Setting JSON to false
	I1229 06:53:22.252599   17440 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2152,"bootTime":1766989050,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:53:22.252669   17440 start.go:143] virtualization: kvm guest
	I1229 06:53:22.254996   17440 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:53:22.256543   17440 notify.go:221] Checking for updates...
	I1229 06:53:22.256551   17440 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:53:22.258115   17440 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:53:22.259464   17440 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:53:22.260823   17440 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 06:53:22.262461   17440 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 06:53:22.263830   17440 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:53:22.265499   17440 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:53:22.265604   17440 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:53:22.301877   17440 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 06:53:22.303062   17440 start.go:309] selected driver: kvm2
	I1229 06:53:22.303099   17440 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:22.303255   17440 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:53:22.304469   17440 cni.go:84] Creating CNI manager for ""
	I1229 06:53:22.304541   17440 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:53:22.304607   17440 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:22.304716   17440 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 06:53:22.306617   17440 out.go:179] * Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	I1229 06:53:22.307989   17440 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 06:53:22.308028   17440 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 06:53:22.308037   17440 cache.go:65] Caching tarball of preloaded images
	I1229 06:53:22.308172   17440 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 06:53:22.308185   17440 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 06:53:22.308288   17440 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/config.json ...
	I1229 06:53:22.308499   17440 start.go:360] acquireMachinesLock for functional-695625: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 06:53:22.308543   17440 start.go:364] duration metric: took 25.28µs to acquireMachinesLock for "functional-695625"
	I1229 06:53:22.308555   17440 start.go:96] Skipping create...Using existing machine configuration
	I1229 06:53:22.308560   17440 fix.go:54] fixHost starting: 
	I1229 06:53:22.310738   17440 fix.go:112] recreateIfNeeded on functional-695625: state=Running err=<nil>
	W1229 06:53:22.310765   17440 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 06:53:22.313927   17440 out.go:252] * Updating the running kvm2 "functional-695625" VM ...
	I1229 06:53:22.313960   17440 machine.go:94] provisionDockerMachine start ...
	I1229 06:53:22.317184   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.317690   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.317748   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.317941   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.318146   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.318156   17440 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 06:53:22.424049   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 06:53:22.424102   17440 buildroot.go:166] provisioning hostname "functional-695625"
	I1229 06:53:22.427148   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.427685   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.427715   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.427957   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.428261   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.428280   17440 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-695625 && echo "functional-695625" | sudo tee /etc/hostname
	I1229 06:53:22.552563   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 06:53:22.555422   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.555807   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.555834   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.556061   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.556278   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.556302   17440 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-695625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-695625/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-695625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 06:53:22.661438   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 06:53:22.661470   17440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 06:53:22.661505   17440 buildroot.go:174] setting up certificates
	I1229 06:53:22.661529   17440 provision.go:84] configureAuth start
	I1229 06:53:22.664985   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.665439   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.665459   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.667758   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.668124   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.668145   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.668257   17440 provision.go:143] copyHostCerts
	I1229 06:53:22.668280   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 06:53:22.668308   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 06:53:22.668317   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 06:53:22.668383   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 06:53:22.668476   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 06:53:22.668505   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 06:53:22.668512   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 06:53:22.668541   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 06:53:22.668582   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 06:53:22.668598   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 06:53:22.668603   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 06:53:22.668632   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 06:53:22.668676   17440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.functional-695625 san=[127.0.0.1 192.168.39.121 functional-695625 localhost minikube]
	I1229 06:53:22.746489   17440 provision.go:177] copyRemoteCerts
	I1229 06:53:22.746545   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 06:53:22.749128   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.749596   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.749616   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.749757   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:22.836885   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 06:53:22.836959   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 06:53:22.872390   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 06:53:22.872481   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 06:53:22.908829   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 06:53:22.908896   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 06:53:22.941014   17440 provision.go:87] duration metric: took 279.457536ms to configureAuth
	I1229 06:53:22.941053   17440 buildroot.go:189] setting minikube options for container-runtime
	I1229 06:53:22.941277   17440 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:53:22.944375   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.944857   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.944916   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.945128   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.945387   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.945402   17440 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 06:53:23.052106   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 06:53:23.052136   17440 buildroot.go:70] root file system type: tmpfs
	I1229 06:53:23.052304   17440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 06:53:23.055887   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.056416   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.056446   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.056629   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.056893   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.056961   17440 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 06:53:23.183096   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 06:53:23.186465   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.186943   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.187006   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.187227   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.187475   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.187494   17440 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 06:53:23.306011   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 06:53:23.306077   17440 machine.go:97] duration metric: took 992.109676ms to provisionDockerMachine
	I1229 06:53:23.306099   17440 start.go:293] postStartSetup for "functional-695625" (driver="kvm2")
	I1229 06:53:23.306114   17440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 06:53:23.306201   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 06:53:23.309537   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.309944   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.309967   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.310122   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.393657   17440 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 06:53:23.398689   17440 command_runner.go:130] > NAME=Buildroot
	I1229 06:53:23.398723   17440 command_runner.go:130] > VERSION=2025.02
	I1229 06:53:23.398731   17440 command_runner.go:130] > ID=buildroot
	I1229 06:53:23.398737   17440 command_runner.go:130] > VERSION_ID=2025.02
	I1229 06:53:23.398745   17440 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1229 06:53:23.398791   17440 info.go:137] Remote host: Buildroot 2025.02
	I1229 06:53:23.398821   17440 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 06:53:23.398897   17440 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 06:53:23.398981   17440 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 06:53:23.398993   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /etc/ssl/certs/134862.pem
	I1229 06:53:23.399068   17440 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> hosts in /etc/test/nested/copy/13486
	I1229 06:53:23.399075   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> /etc/test/nested/copy/13486/hosts
	I1229 06:53:23.399114   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13486
	I1229 06:53:23.412045   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 06:53:23.445238   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts --> /etc/test/nested/copy/13486/hosts (40 bytes)
	I1229 06:53:23.479048   17440 start.go:296] duration metric: took 172.930561ms for postStartSetup
	I1229 06:53:23.479099   17440 fix.go:56] duration metric: took 1.170538464s for fixHost
	I1229 06:53:23.482307   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.482761   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.482808   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.483049   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.483313   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.483327   17440 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 06:53:23.586553   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766991203.580410695
	
	I1229 06:53:23.586572   17440 fix.go:216] guest clock: 1766991203.580410695
	I1229 06:53:23.586579   17440 fix.go:229] Guest: 2025-12-29 06:53:23.580410695 +0000 UTC Remote: 2025-12-29 06:53:23.479103806 +0000 UTC m=+1.278853461 (delta=101.306889ms)
	I1229 06:53:23.586594   17440 fix.go:200] guest clock delta is within tolerance: 101.306889ms
	I1229 06:53:23.586598   17440 start.go:83] releasing machines lock for "functional-695625", held for 1.278049275s
	I1229 06:53:23.590004   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.590438   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.590463   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.591074   17440 ssh_runner.go:195] Run: cat /version.json
	I1229 06:53:23.591186   17440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 06:53:23.594362   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594454   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594831   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.594868   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594954   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.595021   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.595083   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.595278   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.692873   17440 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1229 06:53:23.692948   17440 command_runner.go:130] > {"iso_version": "v1.37.0-1766979747-22353", "kicbase_version": "v0.0.48-1766884053-22351", "minikube_version": "v1.37.0", "commit": "f5189b2bdbb6990e595e25e06a017f8901d29fa8"}
	I1229 06:53:23.693063   17440 ssh_runner.go:195] Run: systemctl --version
	I1229 06:53:23.700357   17440 command_runner.go:130] > systemd 256 (256.7)
	I1229 06:53:23.700393   17440 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1229 06:53:23.700501   17440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1229 06:53:23.707230   17440 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1229 06:53:23.707369   17440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 06:53:23.707433   17440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 06:53:23.719189   17440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 06:53:23.719220   17440 start.go:496] detecting cgroup driver to use...
	I1229 06:53:23.719246   17440 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 06:53:23.719351   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:53:23.744860   17440 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1229 06:53:23.744940   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 06:53:23.758548   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 06:53:23.773051   17440 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 06:53:23.773122   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 06:53:23.786753   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 06:53:23.800393   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 06:53:23.813395   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 06:53:23.826600   17440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 06:53:23.840992   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 06:53:23.854488   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 06:53:23.869084   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 06:53:23.882690   17440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 06:53:23.894430   17440 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1229 06:53:23.894542   17440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 06:53:23.912444   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:24.139583   17440 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 06:53:24.191402   17440 start.go:496] detecting cgroup driver to use...
	I1229 06:53:24.191457   17440 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 06:53:24.191521   17440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 06:53:24.217581   17440 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1229 06:53:24.217604   17440 command_runner.go:130] > [Unit]
	I1229 06:53:24.217609   17440 command_runner.go:130] > Description=Docker Application Container Engine
	I1229 06:53:24.217615   17440 command_runner.go:130] > Documentation=https://docs.docker.com
	I1229 06:53:24.217626   17440 command_runner.go:130] > After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1229 06:53:24.217631   17440 command_runner.go:130] > Wants=network-online.target containerd.service
	I1229 06:53:24.217635   17440 command_runner.go:130] > Requires=docker.socket
	I1229 06:53:24.217638   17440 command_runner.go:130] > StartLimitBurst=3
	I1229 06:53:24.217642   17440 command_runner.go:130] > StartLimitIntervalSec=60
	I1229 06:53:24.217646   17440 command_runner.go:130] > [Service]
	I1229 06:53:24.217649   17440 command_runner.go:130] > Type=notify
	I1229 06:53:24.217653   17440 command_runner.go:130] > Restart=always
	I1229 06:53:24.217660   17440 command_runner.go:130] > ExecStart=
	I1229 06:53:24.217694   17440 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1229 06:53:24.217710   17440 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1229 06:53:24.217748   17440 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1229 06:53:24.217761   17440 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1229 06:53:24.217767   17440 command_runner.go:130] > LimitNOFILE=infinity
	I1229 06:53:24.217782   17440 command_runner.go:130] > LimitNPROC=infinity
	I1229 06:53:24.217790   17440 command_runner.go:130] > LimitCORE=infinity
	I1229 06:53:24.217818   17440 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1229 06:53:24.217828   17440 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1229 06:53:24.217833   17440 command_runner.go:130] > TasksMax=infinity
	I1229 06:53:24.217840   17440 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1229 06:53:24.217847   17440 command_runner.go:130] > Delegate=yes
	I1229 06:53:24.217855   17440 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1229 06:53:24.217864   17440 command_runner.go:130] > KillMode=process
	I1229 06:53:24.217871   17440 command_runner.go:130] > OOMScoreAdjust=-500
	I1229 06:53:24.217881   17440 command_runner.go:130] > [Install]
	I1229 06:53:24.217896   17440 command_runner.go:130] > WantedBy=multi-user.target
	I1229 06:53:24.217973   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:53:24.255457   17440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 06:53:24.293449   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:53:24.313141   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 06:53:24.332090   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:53:24.359168   17440 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1229 06:53:24.359453   17440 ssh_runner.go:195] Run: which cri-dockerd
	I1229 06:53:24.364136   17440 command_runner.go:130] > /usr/bin/cri-dockerd
	I1229 06:53:24.364255   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 06:53:24.377342   17440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 06:53:24.400807   17440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 06:53:24.632265   17440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 06:53:24.860401   17440 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 06:53:24.860544   17440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 06:53:24.885002   17440 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 06:53:24.902479   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:25.138419   17440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 06:53:48.075078   17440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (22.936617903s)
	I1229 06:53:48.075181   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 06:53:48.109404   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 06:53:48.160259   17440 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 06:53:48.213352   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 06:53:48.231311   17440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 06:53:48.408709   17440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 06:53:48.584722   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:48.754219   17440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 06:53:48.798068   17440 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 06:53:48.815248   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:48.983637   17440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 06:53:49.117354   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 06:53:49.139900   17440 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 06:53:49.139985   17440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 06:53:49.146868   17440 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1229 06:53:49.146900   17440 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1229 06:53:49.146910   17440 command_runner.go:130] > Device: 0,23	Inode: 2092        Links: 1
	I1229 06:53:49.146918   17440 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1229 06:53:49.146926   17440 command_runner.go:130] > Access: 2025-12-29 06:53:49.121969518 +0000
	I1229 06:53:49.146933   17440 command_runner.go:130] > Modify: 2025-12-29 06:53:48.995956445 +0000
	I1229 06:53:49.146940   17440 command_runner.go:130] > Change: 2025-12-29 06:53:49.012958222 +0000
	I1229 06:53:49.146947   17440 command_runner.go:130] >  Birth: 2025-12-29 06:53:48.995956445 +0000
	I1229 06:53:49.146986   17440 start.go:574] Will wait 60s for crictl version
	I1229 06:53:49.147040   17440 ssh_runner.go:195] Run: which crictl
	I1229 06:53:49.152717   17440 command_runner.go:130] > /usr/bin/crictl
	I1229 06:53:49.152823   17440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 06:53:49.184154   17440 command_runner.go:130] > Version:  0.1.0
	I1229 06:53:49.184179   17440 command_runner.go:130] > RuntimeName:  docker
	I1229 06:53:49.184183   17440 command_runner.go:130] > RuntimeVersion:  28.5.2
	I1229 06:53:49.184188   17440 command_runner.go:130] > RuntimeApiVersion:  v1
	I1229 06:53:49.184211   17440 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 06:53:49.184266   17440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 06:53:49.212414   17440 command_runner.go:130] > 28.5.2
	I1229 06:53:49.213969   17440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 06:53:49.257526   17440 command_runner.go:130] > 28.5.2
	I1229 06:53:49.262261   17440 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 06:53:49.266577   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:49.267255   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:49.267298   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:49.267633   17440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 06:53:49.286547   17440 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1229 06:53:49.286686   17440 kubeadm.go:884] updating cluster {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 06:53:49.286896   17440 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 06:53:49.286965   17440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 06:53:49.324994   17440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0
	I1229 06:53:49.325029   17440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 06:53:49.325037   17440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0
	I1229 06:53:49.325045   17440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0
	I1229 06:53:49.325052   17440 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1229 06:53:49.325060   17440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1229 06:53:49.325067   17440 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1229 06:53:49.325074   17440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 06:53:49.325113   17440 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 06:53:49.325127   17440 docker.go:624] Images already preloaded, skipping extraction
	I1229 06:53:49.325191   17440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 06:53:49.352256   17440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0
	I1229 06:53:49.352294   17440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0
	I1229 06:53:49.352301   17440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0
	I1229 06:53:49.352309   17440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 06:53:49.352315   17440 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1229 06:53:49.352323   17440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1229 06:53:49.352349   17440 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1229 06:53:49.352361   17440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 06:53:49.352398   17440 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 06:53:49.352412   17440 cache_images.go:86] Images are preloaded, skipping loading
	I1229 06:53:49.352427   17440 kubeadm.go:935] updating node { 192.168.39.121 8441 v1.35.0 docker true true} ...
	I1229 06:53:49.352542   17440 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-695625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 06:53:49.352611   17440 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 06:53:49.466471   17440 command_runner.go:130] > systemd
	I1229 06:53:49.469039   17440 cni.go:84] Creating CNI manager for ""
	I1229 06:53:49.469084   17440 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:53:49.469108   17440 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 06:53:49.469137   17440 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-695625 NodeName:functional-695625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 06:53:49.469275   17440 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-695625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 06:53:49.469338   17440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 06:53:49.495545   17440 command_runner.go:130] > kubeadm
	I1229 06:53:49.495573   17440 command_runner.go:130] > kubectl
	I1229 06:53:49.495580   17440 command_runner.go:130] > kubelet
	I1229 06:53:49.495602   17440 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 06:53:49.495647   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 06:53:49.521658   17440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1229 06:53:49.572562   17440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 06:53:49.658210   17440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1229 06:53:49.740756   17440 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I1229 06:53:49.746333   17440 command_runner.go:130] > 192.168.39.121	control-plane.minikube.internal
	I1229 06:53:49.746402   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:50.073543   17440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 06:53:50.148789   17440 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625 for IP: 192.168.39.121
	I1229 06:53:50.148837   17440 certs.go:195] generating shared ca certs ...
	I1229 06:53:50.148860   17440 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:53:50.149082   17440 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 06:53:50.149152   17440 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 06:53:50.149169   17440 certs.go:257] generating profile certs ...
	I1229 06:53:50.149320   17440 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key
	I1229 06:53:50.149413   17440 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key.a4651613
	I1229 06:53:50.149478   17440 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key
	I1229 06:53:50.149490   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 06:53:50.149508   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 06:53:50.149525   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 06:53:50.149541   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 06:53:50.149556   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 06:53:50.149573   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 06:53:50.149588   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 06:53:50.149607   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 06:53:50.149673   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 06:53:50.149723   17440 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 06:53:50.149738   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 06:53:50.149776   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 06:53:50.149837   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 06:53:50.149873   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 06:53:50.149950   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 06:53:50.150003   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:50.150023   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem -> /usr/share/ca-certificates/13486.pem
	I1229 06:53:50.150038   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /usr/share/ca-certificates/134862.pem
	I1229 06:53:50.150853   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 06:53:50.233999   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 06:53:50.308624   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 06:53:50.436538   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 06:53:50.523708   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 06:53:50.633239   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 06:53:50.746852   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 06:53:50.793885   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 06:53:50.894956   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 06:53:50.955149   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 06:53:51.018694   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 06:53:51.084938   17440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 06:53:51.127238   17440 ssh_runner.go:195] Run: openssl version
	I1229 06:53:51.136812   17440 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1229 06:53:51.136914   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.154297   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 06:53:51.175503   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182560   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182600   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182653   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.195355   17440 command_runner.go:130] > b5213941
	I1229 06:53:51.195435   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 06:53:51.217334   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.233542   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 06:53:51.248778   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255758   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255826   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255874   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.272983   17440 command_runner.go:130] > 51391683
	I1229 06:53:51.273077   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 06:53:51.303911   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.325828   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 06:53:51.347788   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360429   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360567   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360625   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.369235   17440 command_runner.go:130] > 3ec20f2e
	I1229 06:53:51.369334   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 06:53:51.381517   17440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:53:51.387517   17440 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:53:51.387548   17440 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1229 06:53:51.387554   17440 command_runner.go:130] > Device: 253,1	Inode: 1052441     Links: 1
	I1229 06:53:51.387560   17440 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1229 06:53:51.387568   17440 command_runner.go:130] > Access: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387572   17440 command_runner.go:130] > Modify: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387577   17440 command_runner.go:130] > Change: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387581   17440 command_runner.go:130] >  Birth: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387657   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 06:53:51.396600   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.397131   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 06:53:51.410180   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.410283   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 06:53:51.419062   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.419164   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 06:53:51.431147   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.431222   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 06:53:51.441881   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.442104   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 06:53:51.450219   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.450295   17440 kubeadm.go:401] StartCluster: {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:51.450396   17440 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 06:53:51.474716   17440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 06:53:51.489086   17440 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1229 06:53:51.489107   17440 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1229 06:53:51.489113   17440 command_runner.go:130] > /var/lib/minikube/etcd:
	I1229 06:53:51.489117   17440 command_runner.go:130] > member
	I1229 06:53:51.489676   17440 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 06:53:51.489694   17440 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 06:53:51.489753   17440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 06:53:51.503388   17440 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:51.503948   17440 kubeconfig.go:125] found "functional-695625" server: "https://192.168.39.121:8441"
	I1229 06:53:51.504341   17440 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:53:51.504505   17440 kapi.go:59] client config for functional-695625: &rest.Config{Host:"https://192.168.39.121:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 06:53:51.504963   17440 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 06:53:51.504986   17440 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 06:53:51.504992   17440 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 06:53:51.504998   17440 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 06:53:51.505004   17440 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 06:53:51.505012   17440 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 06:53:51.505089   17440 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1229 06:53:51.505414   17440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 06:53:51.521999   17440 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.121
	I1229 06:53:51.522047   17440 kubeadm.go:1161] stopping kube-system containers ...
	I1229 06:53:51.522115   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 06:53:51.550376   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:53:51.550407   17440 command_runner.go:130] > a014f32abcd0
	I1229 06:53:51.550415   17440 command_runner.go:130] > d81259f64136
	I1229 06:53:51.550422   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:53:51.550432   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:53:51.550441   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:53:51.550448   17440 command_runner.go:130] > 4ed279733477
	I1229 06:53:51.550455   17440 command_runner.go:130] > 1fc5fa7d9295
	I1229 06:53:51.550462   17440 command_runner.go:130] > 98261fa185f6
	I1229 06:53:51.550470   17440 command_runner.go:130] > b046056ff071
	I1229 06:53:51.550478   17440 command_runner.go:130] > b3cc8048f6d9
	I1229 06:53:51.550485   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:53:51.550491   17440 command_runner.go:130] > 64853b50a6c5
	I1229 06:53:51.550496   17440 command_runner.go:130] > bd7d900efd48
	I1229 06:53:51.550505   17440 command_runner.go:130] > 8911777281f4
	I1229 06:53:51.550511   17440 command_runner.go:130] > a123d63a8edb
	I1229 06:53:51.550516   17440 command_runner.go:130] > 548561c7ada8
	I1229 06:53:51.550521   17440 command_runner.go:130] > fd22eb0d6c14
	I1229 06:53:51.550528   17440 command_runner.go:130] > 14aafc386533
	I1229 06:53:51.550540   17440 command_runner.go:130] > abbe46bd960e
	I1229 06:53:51.550548   17440 command_runner.go:130] > 4b032678478a
	I1229 06:53:51.550556   17440 command_runner.go:130] > 0af491ef7c2f
	I1229 06:53:51.550566   17440 command_runner.go:130] > 5024b03252e3
	I1229 06:53:51.550572   17440 command_runner.go:130] > fe7b5da2f7fb
	I1229 06:53:51.550582   17440 command_runner.go:130] > ad82b94f7629
	I1229 06:53:51.552420   17440 docker.go:487] Stopping containers: [6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629]
	I1229 06:53:51.552499   17440 ssh_runner.go:195] Run: docker stop 6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629
	I1229 06:53:51.976888   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:53:51.976911   17440 command_runner.go:130] > a014f32abcd0
	I1229 06:53:58.789216   17440 command_runner.go:130] > d81259f64136
	I1229 06:53:58.789240   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:53:58.789248   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:53:58.789252   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:53:58.789256   17440 command_runner.go:130] > 4ed279733477
	I1229 06:53:58.789259   17440 command_runner.go:130] > 1fc5fa7d9295
	I1229 06:53:58.789262   17440 command_runner.go:130] > 98261fa185f6
	I1229 06:53:58.789266   17440 command_runner.go:130] > b046056ff071
	I1229 06:53:58.789269   17440 command_runner.go:130] > b3cc8048f6d9
	I1229 06:53:58.789272   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:53:58.789275   17440 command_runner.go:130] > 64853b50a6c5
	I1229 06:53:58.789278   17440 command_runner.go:130] > bd7d900efd48
	I1229 06:53:58.789281   17440 command_runner.go:130] > 8911777281f4
	I1229 06:53:58.789284   17440 command_runner.go:130] > a123d63a8edb
	I1229 06:53:58.789287   17440 command_runner.go:130] > 548561c7ada8
	I1229 06:53:58.789295   17440 command_runner.go:130] > fd22eb0d6c14
	I1229 06:53:58.789299   17440 command_runner.go:130] > 14aafc386533
	I1229 06:53:58.789303   17440 command_runner.go:130] > abbe46bd960e
	I1229 06:53:58.789306   17440 command_runner.go:130] > 4b032678478a
	I1229 06:53:58.789310   17440 command_runner.go:130] > 0af491ef7c2f
	I1229 06:53:58.789314   17440 command_runner.go:130] > 5024b03252e3
	I1229 06:53:58.789317   17440 command_runner.go:130] > fe7b5da2f7fb
	I1229 06:53:58.789321   17440 command_runner.go:130] > ad82b94f7629
	I1229 06:53:58.790986   17440 ssh_runner.go:235] Completed: docker stop 6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629: (7.238443049s)
	I1229 06:53:58.791057   17440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 06:53:58.833953   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:53:58.857522   17440 command_runner.go:130] > -rw------- 1 root root 5635 Dec 29 06:52 /etc/kubernetes/admin.conf
	I1229 06:53:58.857550   17440 command_runner.go:130] > -rw------- 1 root root 5638 Dec 29 06:52 /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.857561   17440 command_runner.go:130] > -rw------- 1 root root 1974 Dec 29 06:52 /etc/kubernetes/kubelet.conf
	I1229 06:53:58.857571   17440 command_runner.go:130] > -rw------- 1 root root 5590 Dec 29 06:52 /etc/kubernetes/scheduler.conf
	I1229 06:53:58.857610   17440 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 29 06:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Dec 29 06:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1974 Dec 29 06:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Dec 29 06:52 /etc/kubernetes/scheduler.conf
	
	I1229 06:53:58.857671   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:53:58.875294   17440 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I1229 06:53:58.876565   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:53:58.896533   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.896617   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:53:58.917540   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.936703   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.936777   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.957032   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:53:58.970678   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.970742   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:53:58.992773   17440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:53:59.007767   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.061402   17440 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 06:53:59.061485   17440 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1229 06:53:59.061525   17440 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1229 06:53:59.061923   17440 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 06:53:59.062217   17440 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1229 06:53:59.062329   17440 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1229 06:53:59.062606   17440 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1229 06:53:59.062852   17440 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1229 06:53:59.062948   17440 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1229 06:53:59.063179   17440 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 06:53:59.063370   17440 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 06:53:59.063615   17440 command_runner.go:130] > [certs] Using the existing "sa" key
	I1229 06:53:59.066703   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.686012   17440 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 06:53:59.686050   17440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1229 06:53:59.686059   17440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I1229 06:53:59.686069   17440 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 06:53:59.686078   17440 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 06:53:59.686087   17440 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 06:53:59.686203   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.995495   17440 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 06:53:59.995529   17440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 06:53:59.995539   17440 command_runner.go:130] > [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 06:53:59.995545   17440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 06:53:59.995549   17440 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1229 06:53:59.995615   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:54:00.047957   17440 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 06:54:00.047983   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 06:54:00.053966   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 06:54:00.056537   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 06:54:00.059558   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:54:00.175745   17440 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
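	The block above is the standard soft-start re-initialization sequence: the same kubeadm.yaml is fed to "kubeadm init phase certs all", "kubeconfig all", "kubelet-start", "control-plane all", and "etcd local", in that order. The following Go sketch drives the same sequence with os/exec under the assumption of a local binary (paths and the pinned-PATH prefix are copied from the log; error handling is simplified):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Phase arguments in the order they appear in the log above.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		const config = "/var/tmp/minikube/kubeadm.yaml"

		for _, p := range phases {
			args := append(p, "--config", config)
			cmd := exec.Command("kubeadm", args...)
			// Prepend the pinned binary directory, as the log's `env PATH=...` does;
			// os/exec keeps the last duplicate key, so this override wins.
			cmd.Env = append(os.Environ(),
				"PATH=/var/lib/minikube/binaries/v1.35.0:"+os.Getenv("PATH"))
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}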
	I1229 06:54:00.175825   17440 api_server.go:52] waiting for apiserver process to appear ...
	I1229 06:54:00.175893   17440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 06:54:00.233895   17440 command_runner.go:130] > 2416
	I1229 06:54:00.233940   17440 api_server.go:72] duration metric: took 58.126409ms to wait for apiserver process to appear ...
	I1229 06:54:00.233953   17440 api_server.go:88] waiting for apiserver healthz status ...
	I1229 06:54:00.233976   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:05.236821   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:05.236865   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:10.239922   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:10.239956   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:15.242312   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:15.242347   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:20.245667   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:20.245726   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:25.248449   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:25.248501   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:30.249241   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:30.249279   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:35.251737   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:35.251771   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:40.254366   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:40.254407   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:45.257232   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:45.257275   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:50.259644   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:50.259685   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:55.261558   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:55.261592   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:55:00.263123   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
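	The loop above is the apiserver health wait: each attempt polls /healthz and is cut off by a roughly five-second client timeout, and since every attempt here times out, the test eventually blows past its SoftStart deadline. A small Go sketch of that kind of poll, assuming the address from the log and skipping TLS verification for brevity (the real check verifies against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gaps between attempts in the log
			Transport: &http.Transport{
				// Sketch only: the real client trusts the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.121:8441/healthz")
			if err != nil {
				fmt.Println("healthz not ready:", err)
				continue // the per-request timeout itself paces the loop
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Println("healthz returned", resp.Status)
			time.Sleep(time.Second)
		}
		fmt.Println("gave up waiting for apiserver")
	}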
	I1229 06:55:00.263241   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:55:00.287429   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:55:00.288145   17440 logs.go:282] 1 containers: [fb6db97d8ffe]
	I1229 06:55:00.288289   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:55:00.310519   17440 command_runner.go:130] > d81259f64136
	I1229 06:55:00.310561   17440 logs.go:282] 1 containers: [d81259f64136]
	I1229 06:55:00.310630   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:55:00.334579   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:55:00.334624   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:55:00.334692   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:55:00.353472   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:55:00.353503   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:55:00.354626   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:55:00.354714   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:55:00.376699   17440 command_runner.go:130] > 8911777281f4
	I1229 06:55:00.378105   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:55:00.378188   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:55:00.397976   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:55:00.399617   17440 logs.go:282] 1 containers: [17fe16a2822a]
	I1229 06:55:00.399707   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:55:00.419591   17440 logs.go:282] 0 containers: []
	W1229 06:55:00.419617   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:55:00.419665   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:55:00.440784   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:55:00.441985   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
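	With healthz still failing, the runner switches to diagnostics: it enumerates control-plane containers one component at a time with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, as the entries above show (kindnet legitimately matches nothing on a docker-runtime KVM node). A Go sketch of that discovery step, with the component list copied from the log and output parsing simplified:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers (running or exited)
	// whose name matches the given k8s_<component> prefix.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one ID per line, possibly none
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "lookup failed:", err)
				continue
			}
			fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
		}
	}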
	I1229 06:55:00.442020   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:55:00.442030   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:55:00.465151   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.465192   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:55:00.465226   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.465237   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:55:00.465255   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.465271   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:55:00.465285   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:55:00.465823   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:55:00.465845   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:55:00.487618   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:55:00.487646   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:55:00.508432   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.508468   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:55:00.508482   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:55:00.508508   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:55:00.508521   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:55:00.508529   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.508541   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:55:00.508551   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:55:00.508560   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:55:00.508568   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:55:00.510308   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:55:00.510337   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:55:00.531862   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.532900   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:55:00.532924   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:55:00.554051   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:55:00.554084   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:55:00.554095   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:55:00.554109   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:55:00.554131   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:55:00.554148   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:55:00.554170   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:55:00.554189   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:55:00.554195   17440 command_runner.go:130] !  >
	I1229 06:55:00.554208   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:55:00.554224   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:55:00.554250   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:55:00.554261   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:55:00.554273   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.554316   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:55:00.554327   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:55:00.554339   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:55:00.554350   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:55:00.554366   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:55:00.554381   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:55:00.554390   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:55:00.554402   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:55:00.554414   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:55:00.554427   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:55:00.554437   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:55:00.554452   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:55:00.556555   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:55:00.556578   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:55:00.581812   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:55:00.581848   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:55:00.581857   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:55:00.581865   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581874   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581881   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:55:00.581890   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:55:00.581911   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:55:00.581919   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581930   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581942   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:55:00.581949   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581957   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581964   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581975   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581985   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581993   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582003   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582010   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582020   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582030   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582037   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582044   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582051   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582070   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582080   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582088   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582097   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582105   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582115   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582125   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582141   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582152   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582160   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582170   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582177   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582186   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582193   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582203   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582211   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582221   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582228   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582235   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582242   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582252   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582261   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582269   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582276   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582287   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582294   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582302   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582312   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582319   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582329   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582336   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582346   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582353   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582363   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582370   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582378   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582385   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
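	Each discovered container then has its last 400 log lines pulled with `docker logs --tail 400 <id>`; the kube-scheduler entry above is the telling one, since it sat for a minute before failing to fetch the extension-apiserver-authentication configmap, consistent with the apiserver never answering. After the per-container logs, the runner moves on to the docker/cri-docker journal in the next block. A sketch of the per-container step, assuming local docker access and the container IDs reported by the discovery step:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailLogs fetches the last n log lines of a container, the same way the
	// harness does through "docker logs --tail". Sketch only.
	func tailLogs(id string, n int) (string, error) {
		out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, id := range []string{"fb6db97d8ffe", "d81259f64136", "6f69ba6a1553",
			"4d49952084c9", "a79d99ad3fde", "8911777281f4", "17fe16a2822a", "bd96b57aa9fc"} {
			logs, err := tailLogs(id, 400)
			if err != nil {
				fmt.Println(id, "error:", err)
			}
			fmt.Printf("=== %s ===\n%s", id, logs)
		}
	}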
	I1229 06:55:00.586872   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:55:00.586916   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:55:00.609702   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.609731   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.609766   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.609784   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.609811   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.609822   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:55:00.609831   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:55:00.609842   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.609848   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.609857   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:55:00.609865   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.609879   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.609890   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.609906   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.609915   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.609923   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:55:00.609943   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.609954   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:55:00.609966   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.609976   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.609983   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:55:00.609990   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.609998   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610006   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610016   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610024   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610041   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610050   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610070   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610082   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.610091   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.610100   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.610107   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:55:00.610115   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.610123   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.610131   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.610141   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.610152   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.610159   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.610168   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610179   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:55:00.610191   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:55:00.610203   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.610216   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.610223   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.610231   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.610242   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:55:00.610251   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.610258   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.610265   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.610271   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.610281   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:55:00.610290   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.610303   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.610323   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.610335   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.610345   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.610355   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:55:00.610374   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.610384   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:55:00.610394   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.610404   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.610412   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610422   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610429   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610439   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610447   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610455   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610461   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610470   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:55:00.610476   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610483   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610491   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.610500   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.610508   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.610516   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:55:00.610523   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.610531   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.610538   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.610550   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.610559   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.610567   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.610573   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610579   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.610595   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.610607   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:55:00.610615   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.610622   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.610630   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.610637   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.610644   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:55:00.610653   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.610669   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.610680   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.610692   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.610705   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.610713   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:55:00.610735   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.610744   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:55:00.610755   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.610765   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.610772   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610781   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610789   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610809   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610818   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610824   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610853   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610867   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610881   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610896   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610909   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:55:00.610922   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610936   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610949   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610964   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610979   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.610995   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611010   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.611021   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.611037   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:55:00.611048   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:55:00.611062   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.611070   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.611079   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:55:00.611087   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.611096   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.611102   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.611109   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:55:00.611118   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.611125   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:55:00.611135   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.611146   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.611157   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.611167   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.611179   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.611186   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:55:00.611199   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611213   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611226   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611241   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611266   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611281   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611295   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611310   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611325   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611342   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611355   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611370   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611382   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611404   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.611417   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:55:00.611435   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:55:00.611449   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:55:00.611464   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:55:00.611476   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:55:00.611491   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:55:00.611502   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:55:00.611517   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:55:00.611529   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:55:00.611544   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:55:00.611558   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:55:00.611574   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:55:00.611586   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:55:00.611601   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:55:00.611617   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:55:00.611631   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:55:00.611645   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:55:00.611660   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:55:00.611674   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:55:00.611689   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:55:00.611702   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:55:00.611712   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:55:00.611722   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.611732   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.611740   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.611751   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.611759   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.611767   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.611835   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.611849   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.611867   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:55:00.611877   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.611888   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:55:00.611894   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.611901   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:55:00.611909   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.611917   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.611929   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.611937   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.611946   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:55:00.611954   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.611963   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.611971   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.611981   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.611990   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.611999   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.612006   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.612019   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612031   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612046   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612063   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612079   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612093   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612112   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:55:00.612128   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612142   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612157   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612171   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612185   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612201   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612217   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612230   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612245   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612259   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612274   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612293   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:55:00.612309   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612323   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:55:00.612338   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:55:00.612354   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612366   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612380   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:55:00.612394   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.612407   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:55:00.629261   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:55:00.629293   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:55:00.671242   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:55:00.671279   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       About a minute ago   Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:55:00.671293   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:55:00.671303   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       About a minute ago   Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:55:00.671315   17440 command_runner.go:130] > fb6db97d8ffe4       5c6acd67e9cd1       About a minute ago   Exited              kube-apiserver            1                   4ed2797334771       kube-apiserver-functional-695625            kube-system
	I1229 06:55:00.671327   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       About a minute ago   Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:55:00.671337   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       About a minute ago   Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:55:00.671347   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:55:00.671362   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       2 minutes ago        Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:55:00.673604   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:55:00.673628   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:55:00.695836   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077121    2634 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:55:00.695863   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077418    2634 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:55:00.695877   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077955    2634 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:55:00.695887   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.109084    2634 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:55:00.695901   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.135073    2634 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:55:00.695910   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.137245    2634 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:55:00.695920   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.137294    2634 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:55:00.695934   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.137340    2634 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:55:00.695942   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.209773    2634 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:55:00.695952   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.209976    2634 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:55:00.695962   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210050    2634 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:55:00.695975   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210361    2634 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:55:00.696001   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210374    2634 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:55:00.696011   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210392    2634 policy_none.go:50] "Start"
	I1229 06:55:00.696020   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210408    2634 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:55:00.696029   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210421    2634 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:55:00.696038   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210527    2634 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:55:00.696045   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210534    2634 policy_none.go:44] "Start"
	I1229 06:55:00.696056   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.219245    2634 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:55:00.696067   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.220437    2634 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:55:00.696078   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.220456    2634 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:55:00.696089   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.221071    2634 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:55:00.696114   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.226221    2634 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:55:00.696126   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.239387    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696144   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.239974    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696155   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.240381    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696165   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.262510    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696185   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283041    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696208   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283087    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696228   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283118    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696247   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283135    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696268   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283151    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696288   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283163    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696309   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283175    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696329   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283189    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696357   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283209    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696378   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283223    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696400   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283249    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696416   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.285713    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-functional-695625\" already exists" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696428   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.290012    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-functional-695625\" already exists" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696442   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.290269    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-functional-695625\" already exists" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696454   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.304300    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-functional-695625\" already exists" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696466   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.336817    2634 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.696475   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.351321    2634 kubelet_node_status.go:123] "Node was previously registered" node="functional-695625"
	I1229 06:55:00.696486   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.351415    2634 kubelet_node_status.go:77] "Successfully registered node" node="functional-695625"
	I1229 06:55:00.696493   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.033797    2634 apiserver.go:52] "Watching apiserver"
	I1229 06:55:00.696503   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.077546    2634 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1229 06:55:00.696527   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.181689    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-functional-695625" podStartSLOduration=3.181660018 podStartE2EDuration="3.181660018s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.180947341 +0000 UTC m=+1.223544146" watchObservedRunningTime="2025-12-29 06:52:42.181660018 +0000 UTC m=+1.224256834"
	I1229 06:55:00.696555   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.221952    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-functional-695625" podStartSLOduration=3.221936027 podStartE2EDuration="3.221936027s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.202120755 +0000 UTC m=+1.244717560" watchObservedRunningTime="2025-12-29 06:52:42.221936027 +0000 UTC m=+1.264532905"
	I1229 06:55:00.696583   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.238774    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-695625" podStartSLOduration=3.238759924 podStartE2EDuration="3.238759924s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.238698819 +0000 UTC m=+1.281295638" watchObservedRunningTime="2025-12-29 06:52:42.238759924 +0000 UTC m=+1.281356744"
	I1229 06:55:00.696609   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.238905    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-functional-695625" podStartSLOduration=3.238868136 podStartE2EDuration="3.238868136s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.224445467 +0000 UTC m=+1.267042290" watchObservedRunningTime="2025-12-29 06:52:42.238868136 +0000 UTC m=+1.281464962"
	I1229 06:55:00.696622   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266475    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696634   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266615    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696651   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266971    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696664   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.267487    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696678   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.287234    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-functional-695625\" already exists" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696690   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.287316    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696704   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.292837    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-functional-695625\" already exists" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696718   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.293863    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.696730   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.293764    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-functional-695625\" already exists" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696745   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.294163    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.696757   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.298557    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-functional-695625\" already exists" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696770   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.298633    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696782   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.272537    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.696807   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273148    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696835   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273501    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.696850   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273627    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696863   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: E1229 06:52:44.279056    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696877   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: E1229 06:52:44.279353    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696887   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: I1229 06:52:44.754123    2634 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1229 06:55:00.696899   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: I1229 06:52:44.756083    2634 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1229 06:55:00.696917   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.407560    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94mg5\" (UniqueName: \"kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696938   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.408503    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-proxy\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696958   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.408957    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-xtables-lock\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696976   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.409131    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-lib-modules\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696991   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528153    2634 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697004   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528186    2634 projected.go:196] Error preparing data for projected volume kube-api-access-94mg5 for pod kube-system/kube-proxy-g7lp9: configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697032   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528293    2634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5 podName:9c2c2ac1-7fa0-427d-b78e-ee14e169895a nodeName:}" failed. No retries permitted until 2025-12-29 06:52:46.028266861 +0000 UTC m=+5.070863673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-94mg5" (UniqueName: "kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5") pod "kube-proxy-g7lp9" (UID: "9c2c2ac1-7fa0-427d-b78e-ee14e169895a") : configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697044   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.406131    2634 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	I1229 06:55:00.697064   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519501    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64sn\" (UniqueName: \"kubernetes.io/projected/00a95e37-1394-45a7-a376-b195e31e3e9c-kube-api-access-b64sn\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:55:00.697084   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519550    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a95e37-1394-45a7-a376-b195e31e3e9c-config-volume\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:55:00.697104   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519571    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:55:00.697124   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519587    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:55:00.697138   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.411642    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605"
	I1229 06:55:00.697151   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.545186    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.697170   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731196    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f201ca-6d54-4e15-9584-396fb1486f3c-tmp\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:55:00.697192   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731252    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc5d\" (UniqueName: \"kubernetes.io/projected/b5f201ca-6d54-4e15-9584-396fb1486f3c-kube-api-access-ghc5d\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:55:00.697206   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.628275    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697229   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.634714    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9mrnn" podStartSLOduration=2.634698273 podStartE2EDuration="2.634698273s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.631484207 +0000 UTC m=+7.674081027" watchObservedRunningTime="2025-12-29 06:52:48.634698273 +0000 UTC m=+7.677295093"
	I1229 06:55:00.697245   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.649761    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.697268   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.694857    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfq7m" podStartSLOduration=2.694842541 podStartE2EDuration="2.694842541s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.672691157 +0000 UTC m=+7.715287974" watchObservedRunningTime="2025-12-29 06:52:48.694842541 +0000 UTC m=+7.737439360"
	I1229 06:55:00.697296   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.728097    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.728082592 podStartE2EDuration="1.728082592s" podCreationTimestamp="2025-12-29 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.696376688 +0000 UTC m=+7.738973499" watchObservedRunningTime="2025-12-29 06:52:48.728082592 +0000 UTC m=+7.770679413"
	I1229 06:55:00.697310   17440 command_runner.go:130] > Dec 29 06:52:49 functional-695625 kubelet[2634]: E1229 06:52:49.674249    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697322   17440 command_runner.go:130] > Dec 29 06:52:50 functional-695625 kubelet[2634]: E1229 06:52:50.680852    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697336   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.223368    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.697361   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: I1229 06:52:52.243928    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g7lp9" podStartSLOduration=7.243911092 podStartE2EDuration="7.243911092s" podCreationTimestamp="2025-12-29 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.744380777 +0000 UTC m=+7.786977597" watchObservedRunningTime="2025-12-29 06:52:52.243911092 +0000 UTC m=+11.286507895"
	I1229 06:55:00.697376   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.396096    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.697388   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.693687    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.697402   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: E1229 06:52:53.390926    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.697420   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979173    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:55:00.697442   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979225    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:55:00.697463   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979732    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	I1229 06:55:00.697483   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.981248    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "kube-api-access-lc5xj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	I1229 06:55:00.697499   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079447    2634 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:55:00.697515   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079521    2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:55:00.697526   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.715729    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697536   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.756456    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697554   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: E1229 06:52:54.758451    2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697576   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.758508    2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"} err="failed to get container status \"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697591   17440 command_runner.go:130] > Dec 29 06:52:55 functional-695625 kubelet[2634]: I1229 06:52:55.144582    2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4313c5f-3b86-48de-8f3c-02d7e007542a" path="/var/lib/kubelet/pods/c4313c5f-3b86-48de-8f3c-02d7e007542a/volumes"
	I1229 06:55:00.697608   17440 command_runner.go:130] > Dec 29 06:52:58 functional-695625 kubelet[2634]: E1229 06:52:58.655985    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.697621   17440 command_runner.go:130] > Dec 29 06:53:20 functional-695625 kubelet[2634]: E1229 06:53:20.683378    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697637   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913108    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697651   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913180    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697669   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913193    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697710   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915141    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697726   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915181    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697746   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915192    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697762   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139490    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.697775   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139600    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697790   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139623    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697815   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139634    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697830   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917175    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697846   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917271    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697860   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917284    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697876   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918722    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697892   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918780    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697906   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918792    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697923   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139097    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.697937   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139170    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697951   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139187    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697966   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139214    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697986   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921730    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698002   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921808    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698029   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921823    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698046   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.923664    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698060   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924161    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698081   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924185    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698097   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139396    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698113   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139458    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698126   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139472    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698141   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139485    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698155   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698172   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698187   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:55:00.698202   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698218   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:55:00.698235   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698274   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698293   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698309   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698325   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698341   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698362   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698378   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698395   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698408   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698424   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698439   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698455   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698469   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698484   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698501   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698514   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698527   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698541   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698554   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698577   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698590   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698606   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698620   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698634   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698650   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698666   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698682   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698696   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698711   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698727   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698743   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698756   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698769   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698784   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698808   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698823   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698840   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698853   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698868   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698886   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698903   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698916   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698933   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698948   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698962   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698976   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698993   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:55:00.699007   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:55:00.699018   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699031   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699042   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.699055   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.699067   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:55:00.699078   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.699093   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699105   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699119   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699130   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:55:00.699145   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.699157   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.699180   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:55:00.699195   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.699207   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:55:00.699224   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:55:00.699243   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:55:00.699256   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:55:00.699269   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.699284   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.699310   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.699330   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.699343   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:55:00.699362   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:55:00.699380   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.699407   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:55:00.699439   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:55:00.699460   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:55:00.699477   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.699497   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.699515   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:55:00.699533   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.699619   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.699640   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.699660   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699683   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699709   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699722   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:55:00.699738   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699750   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:55:00.699763   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699774   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:55:00.699785   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699807   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:55:00.699820   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.699834   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699846   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699861   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699872   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.699886   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.699931   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699946   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699956   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:55:00.699972   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700008   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:55:00.700031   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700053   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700067   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:55:00.700078   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:55:00.700091   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:55:00.700102   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700116   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:55:00.700129   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.700139   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:55:00.700159   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700168   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:55:00.700179   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:55:00.700190   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700199   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700217   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700228   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700240   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:55:00.700250   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:55:00.700268   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700281   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.700291   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:55:00.700310   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700321   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700331   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:55:00.700349   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700364   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700375   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700394   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700405   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700415   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700427   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700454   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:55:00.700474   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700515   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:55:00.700529   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700539   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700558   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700570   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.700578   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:55:00.700584   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:55:00.700590   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:55:00.700597   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:55:00.700603   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:55:00.700612   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:55:00.700620   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.700631   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:55:00.700641   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:55:00.700652   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:55:00.700662   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:55:00.700674   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.700684   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:55:00.700696   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:55:00.700707   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:55:00.700717   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:55:00.700758   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:55:00.700770   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:55:00.700779   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:55:00.700790   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:55:00.700816   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:55:00.700831   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:55:00.700846   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:55:00.700858   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:55:00.700866   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:55:00.700879   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:55:00.700891   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:55:00.700905   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:55:00.700912   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:55:00.700921   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:55:00.700932   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.700943   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:55:00.700951   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:55:00.700963   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:55:00.700971   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:55:00.700986   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:55:00.701000   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:55:00.701008   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:55:00.701020   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:55:00.701029   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:55:00.701037   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:55:00.701046   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:55:00.701061   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:55:00.701073   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:55:00.701082   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:55:00.701093   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:55:00.701100   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:55:00.701114   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:55:00.701124   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:55:00.701143   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.701160   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.701170   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:55:00.701178   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:55:00.701188   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:55:00.701201   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:55:00.701210   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:55:00.701218   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:55:00.701226   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:55:00.701237   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:55:00.701246   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:55:00.701256   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:55:00.701266   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:55:00.701277   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:55:00.701287   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:55:00.701297   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:55:00.701308   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:55:00.701322   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701334   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701348   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701361   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:55:00.701372   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:55:00.701385   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:55:00.701399   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:55:00.701410   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:55:00.701422   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:55:00.701433   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701447   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.701458   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:55:00.701471   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:55:00.701483   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.701496   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:55:00.701508   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:55:00.701521   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:55:00.701533   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701550   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.701567   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:55:00.701581   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701592   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701611   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:55:00.701625   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701642   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:55:00.701678   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:55:00.701695   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:55:00.701705   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.701716   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701735   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:55:00.701749   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701764   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:55:00.701780   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:55:00.701807   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701827   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701847   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701867   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701886   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.701907   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701928   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701948   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701971   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701995   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.702014   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.702027   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.755255   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:55:00.755293   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:55:00.771031   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:55:00.771066   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:55:00.771079   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:55:00.771088   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:55:00.771097   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:55:00.771103   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:55:00.771109   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:55:00.771116   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:55:00.771121   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:55:00.771126   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:55:00.771131   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:55:00.771136   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:55:00.771143   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:55:00.771153   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:55:00.771158   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:55:00.771165   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:55:00.771175   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:55:00.771185   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:55:00.771191   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:55:00.771196   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:55:00.771202   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:55:00.772218   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:55:00.772246   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:56:00.863293   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:56:00.863340   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.091082059s)
	W1229 06:56:00.863385   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:56:00.863402   17440 logs.go:123] Gathering logs for kube-apiserver [fb6db97d8ffe] ...
	I1229 06:56:00.863420   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6db97d8ffe"
	I1229 06:56:00.897112   17440 command_runner.go:130] ! I1229 06:53:50.588377       1 options.go:263] external host was not specified, using 192.168.39.121
	I1229 06:56:00.897142   17440 command_runner.go:130] ! I1229 06:53:50.597275       1 server.go:150] Version: v1.35.0
	I1229 06:56:00.897153   17440 command_runner.go:130] ! I1229 06:53:50.597323       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:00.897164   17440 command_runner.go:130] ! E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	W1229 06:56:00.898716   17440 logs.go:138] Found kube-apiserver [fb6db97d8ffe] problem: E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:56:00.898738   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:56:00.898750   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:56:00.935530   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:00.938590   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:00.938653   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:00.938666   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:00.938679   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:00.938689   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:00.938712   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:00.938728   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:00.938838   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:00.938875   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:56:00.938892   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:00.938902   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:56:00.938913   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:56:00.938922   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:00.938935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:00.938946   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:00.938958   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:56:00.938969   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:00.938978   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:00.938993   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:00.939003   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:00.939022   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:00.939035   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:00.939046   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:00.939053   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:00.939062   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:00.939071   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:56:00.939081   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:56:00.939091   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:00.939111   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:00.939126   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:00.939142   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:00.939162   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:00.939181   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:00.939213   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:00.939249   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:00.939258   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:00.939274   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:00.939289   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:00.939302   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:00.939324   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:00.939342   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.939352   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:00.939362   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:00.939377   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:00.939389   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:00.939404   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:56:00.939423   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:56:00.939439   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:56:00.939458   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:00.939467   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:56:00.939478   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:00.939494   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:00.939513   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:56:00.939528   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:56:00.939544   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:00.939564   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:00.939586   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:00.939603   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:00.939616   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:00.939882   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:00.939915   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:00.939932   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:00.939947   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:00.939960   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:56:00.939998   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:00.940030   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:00.940064   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:00.940122   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940150   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:56:00.940167   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:56:00.940187   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:00.940204   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:00.940257   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940277   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:56:00.940301   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:00.940334   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:00.940371   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940389   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.940425   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940447   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.940473   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:00.955065   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:56:00.955108   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 06:56:00.955188   17440 out.go:285] X Problems detected in kube-apiserver [fb6db97d8ffe]:
	W1229 06:56:00.955202   17440 out.go:285]   E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:56:00.955209   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:56:00.955215   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
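The bind failure reported above is the root symptom of this SoftStart failure: a previous kube-apiserver process is still holding 0.0.0.0:8441, so every restart attempt exits immediately. A minimal diagnostic sketch, assuming shell access to the functional-695625 VM; the profile name and the k8s_kube-apiserver name filter are taken from the log above, everything else is illustrative and not part of the test output:

    # find which process still owns port 8441 inside the minikube VM
    minikube ssh -p functional-695625 "sudo ss -ltnp | grep ':8441'"
    # list any lingering kube-apiserver containers that could be holding the port
    minikube ssh -p functional-695625 "sudo docker ps -a --filter name=k8s_kube-apiserver"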
	I1229 06:56:10.957344   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:56:15.961183   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:56:15.961319   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:56:15.981705   17440 command_runner.go:130] > 18d0015c724a
	I1229 06:56:15.982641   17440 logs.go:282] 1 containers: [18d0015c724a]
	I1229 06:56:15.982732   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:56:16.002259   17440 command_runner.go:130] > 6b7711ee25a2
	I1229 06:56:16.002292   17440 command_runner.go:130] > d81259f64136
	I1229 06:56:16.002322   17440 logs.go:282] 2 containers: [6b7711ee25a2 d81259f64136]
	I1229 06:56:16.002399   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:56:16.021992   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:56:16.022032   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:56:16.022113   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:56:16.048104   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:56:16.048133   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:56:16.049355   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:56:16.049441   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:56:16.071523   17440 command_runner.go:130] > 8911777281f4
	I1229 06:56:16.072578   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:56:16.072668   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:56:16.092921   17440 command_runner.go:130] > f48fc04e3475
	I1229 06:56:16.092948   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:56:16.092975   17440 logs.go:282] 2 containers: [f48fc04e3475 17fe16a2822a]
	I1229 06:56:16.093047   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:56:16.113949   17440 logs.go:282] 0 containers: []
	W1229 06:56:16.113983   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:56:16.114047   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:56:16.135700   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:56:16.135739   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:56:16.135766   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:56:16.135786   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:56:16.152008   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:56:16.152038   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:56:16.152046   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:56:16.152054   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:56:16.152063   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:56:16.152069   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:56:16.152076   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:56:16.152081   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:56:16.152086   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:56:16.152091   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:56:16.152096   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:56:16.152102   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:56:16.152107   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:56:16.152112   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:56:16.152119   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:56:16.152128   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:56:16.152148   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:56:16.152164   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:56:16.152180   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:56:16.152190   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:56:16.152201   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:56:16.152209   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:56:16.152217   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:56:16.153163   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:56:16.153192   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:56:16.174824   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:56:16.174856   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:56:16.174862   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:56:16.174873   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:56:16.174892   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:56:16.174900   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:56:16.174913   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:56:16.174920   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:56:16.174924   17440 command_runner.go:130] !  >
	I1229 06:56:16.174931   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:56:16.174941   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:56:16.174957   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:56:16.174966   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:56:16.174975   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.174985   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:56:16.174994   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:56:16.175003   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:56:16.175012   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:56:16.175024   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:56:16.175033   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:56:16.175040   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:56:16.175050   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:56:16.175074   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:56:16.175325   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:56:16.175351   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:56:16.175362   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:56:16.177120   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:56:16.177144   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:56:16.222627   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:56:16.222665   17440 command_runner.go:130] > 18d0015c724a8       5c6acd67e9cd1       5 seconds ago       Exited              kube-apiserver            3                   d3819cc8ab802       kube-apiserver-functional-695625            kube-system
	I1229 06:56:16.222684   17440 command_runner.go:130] > f48fc04e34751       2c9a4b058bd7e       16 seconds ago      Running             kube-controller-manager   2                   0a96e34d38f8c       kube-controller-manager-functional-695625   kube-system
	I1229 06:56:16.222707   17440 command_runner.go:130] > 6b7711ee25a2d       0a108f7189562       16 seconds ago      Running             etcd                      2                   173054afc2f39       etcd-functional-695625                      kube-system
	I1229 06:56:16.222730   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       2 minutes ago       Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:56:16.222749   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       2 minutes ago       Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:56:16.222768   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       2 minutes ago       Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:56:16.222810   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       2 minutes ago       Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:56:16.222831   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       2 minutes ago       Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:56:16.222851   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:56:16.222879   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       3 minutes ago       Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
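The container listing above shows the pattern concisely: etcd, kube-controller-manager and kube-scheduler restart and keep running, while kube-apiserver (18d0015c724a8) has already exited on its third attempt. A hedged sketch of how one could pull that container's own log directly, mirroring the docker logs pattern the harness uses below; the container ID is copied from the table and the tail length is arbitrary:

    minikube ssh -p functional-695625 "sudo docker logs --tail 50 18d0015c724a8"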
	I1229 06:56:16.225409   17440 logs.go:123] Gathering logs for etcd [6b7711ee25a2] ...
	I1229 06:56:16.225439   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b7711ee25a2"
	I1229 06:56:16.247416   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.924768Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.247449   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925193Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:16.247516   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925252Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:16.247533   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925487Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:16.247545   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925602Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.247555   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925710Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:16.247582   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925810Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.247605   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.934471Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:16.247698   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.935217Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:16.247722   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.937503Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000068080}"}
	I1229 06:56:16.247733   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940423Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:16.247745   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940850Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.479356ms"}
	I1229 06:56:16.247753   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.941120Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":499}
	I1229 06:56:16.247762   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945006Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:16.247774   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945707Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:16.247782   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945966Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:16.247807   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.951906Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":499}
	I1229 06:56:16.247816   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952063Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:16.247825   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952160Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:16.247840   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952338Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:16.247851   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952385Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:16.247867   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952396Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:16.247878   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952406Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:16.247886   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952416Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:16.247893   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952460Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:16.247902   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:16.247914   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 3"}
	I1229 06:56:16.247924   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 3, commit: 499, applied: 0, lastindex: 499, lastterm: 3]"}
	I1229 06:56:16.247935   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.955095Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:16.247952   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.961356Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:16.247965   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.967658Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:16.247975   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.968487Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:16.247988   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969020Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.248000   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969260Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:16.248016   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969708Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:16.248035   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970043Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.248063   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970828Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:16.248074   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971046Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:16.248083   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970057Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.248092   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971258Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:16.248103   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970152Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:16.248113   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971336Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:16.248126   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971370Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:16.248136   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970393Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:16.248153   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972410Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:16.248166   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972698Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:16.248177   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 3"}
	I1229 06:56:16.248186   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 3"}
	I1229 06:56:16.248198   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.248208   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.248219   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 4"}
	I1229 06:56:16.248228   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 4"}
	I1229 06:56:16.248240   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.248248   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 4"}
	I1229 06:56:16.248260   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.356018Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 4"}
	I1229 06:56:16.248275   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358237Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:16.248287   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358323Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.248295   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358268Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.248304   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:16.248312   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:16.248320   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360417Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.248331   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.248341   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:16.248352   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363760Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:16.254841   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:56:16.254869   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:56:16.278647   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.278679   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:16.278723   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:16.278736   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:16.278750   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.278759   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:16.278780   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.278809   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:16.278890   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:16.278913   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:56:16.278923   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:16.278935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:56:16.278946   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:56:16.278957   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:16.278971   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:16.278982   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:16.278996   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:56:16.279006   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:16.279014   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:16.279031   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:16.279040   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:16.279072   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:16.279083   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:16.279091   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:16.279101   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:16.279110   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:16.279121   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:56:16.279132   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:56:16.279142   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:16.279159   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:16.279173   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:16.279183   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:16.279195   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.279208   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:16.279226   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.279249   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:16.279260   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:16.279275   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:16.279289   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:16.279300   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:16.279313   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:16.279322   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279332   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:16.279343   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:16.279359   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:16.279374   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:16.279386   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:56:16.279396   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:56:16.279406   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:56:16.279418   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.279429   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:56:16.279439   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.279451   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.279460   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:56:16.279469   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.279479   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.279494   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:16.279503   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.279513   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.279523   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:16.279531   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:16.279541   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.279551   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:16.279562   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:16.279570   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:56:16.279585   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:16.279603   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:16.279622   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:16.279661   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279676   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.279688   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:56:16.279698   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:16.279711   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:16.279730   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279741   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:56:16.279751   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:16.279764   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:16.279785   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279805   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279825   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279836   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279852   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:16.287590   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:56:16.287613   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:56:16.310292   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:56:16.310320   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:56:16.331009   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:56:16.331044   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:56:16.331054   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:56:16.331067   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331076   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331083   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:56:16.331093   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:56:16.331114   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:56:16.331232   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331256   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331268   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:56:16.331275   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331289   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331298   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331316   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331329   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331341   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331355   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331363   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331374   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331386   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331400   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331413   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331425   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331441   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331454   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331468   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331478   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331488   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331496   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331506   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331519   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331529   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331537   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331547   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331555   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331564   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331572   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331580   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331592   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331604   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331618   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331629   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331645   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331659   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331673   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331689   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331703   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331716   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331728   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331740   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331756   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331771   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331784   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331816   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331830   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331847   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331863   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331879   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331894   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331908   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.336243   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:56:16.336267   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:56:16.358115   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358145   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358155   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358165   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358177   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.358186   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:56:16.358194   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:56:16.358203   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358209   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.358220   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:56:16.358229   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.358241   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.358254   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.358266   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.358278   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.358285   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:56:16.358307   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.358315   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:56:16.358328   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.358336   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.358343   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:56:16.358350   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358360   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.358369   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.358377   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.358385   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.358399   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.358408   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.358415   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358425   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358436   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358445   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358455   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:56:16.358463   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.358474   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.358481   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.358491   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.358500   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.358508   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.358515   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358530   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:56:16.358543   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:56:16.358555   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.358576   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.358584   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.358593   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.358604   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:56:16.358614   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.358621   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.358628   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.358635   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.358644   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:56:16.358653   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.358666   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.358685   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.358697   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.358707   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.358716   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:56:16.358735   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.358745   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:56:16.358755   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.358763   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.358805   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.358818   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.358827   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.358837   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.358847   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.358854   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.358861   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358867   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:56:16.358874   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358881   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358893   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358904   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358913   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358921   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:56:16.358930   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.358942   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.358950   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.358959   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.358970   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.358979   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.358986   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358992   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.359001   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.359011   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:56:16.359021   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.359029   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.359036   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.359042   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.359052   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:56:16.359060   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.359071   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.359084   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.359094   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.359106   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.359113   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:56:16.359135   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.359144   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:56:16.359154   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.359164   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.359172   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.359182   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.359190   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.359198   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.359206   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.359213   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.359244   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359260   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359275   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359288   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359300   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:56:16.359313   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359328   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359343   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359357   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359372   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359386   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359399   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.359410   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.359422   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:56:16.359435   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:56:16.359442   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.359452   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.359460   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:56:16.359468   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.359474   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.359481   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.359487   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:56:16.359494   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.359502   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:56:16.359511   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.359521   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.359532   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.359544   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.359553   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.359561   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:56:16.359574   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359590   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359602   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359617   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359630   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359646   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359660   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359676   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359689   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359706   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359719   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359731   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359744   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359763   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.359779   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:56:16.359800   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:56:16.359813   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:56:16.359827   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:56:16.359837   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:56:16.359852   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:56:16.359864   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:56:16.359878   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:56:16.359890   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:56:16.359904   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:56:16.359916   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:56:16.359932   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:56:16.359945   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:56:16.359960   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:56:16.359975   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:56:16.359988   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:56:16.360003   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:56:16.360019   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:56:16.360037   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:56:16.360051   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:56:16.360064   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:56:16.360074   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:56:16.360085   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.360093   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.360102   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.360113   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.360121   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.360130   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.360163   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.360172   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.360189   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:56:16.360197   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.360204   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:56:16.360210   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.360218   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:56:16.360225   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.360236   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.360245   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.360255   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.360263   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:56:16.360271   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.360280   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.360288   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.360297   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.360308   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.360317   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.360326   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.360338   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360353   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360365   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360380   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360392   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360410   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360426   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:56:16.360441   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360454   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360467   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360482   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360494   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360510   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360525   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360538   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360553   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360566   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360582   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360599   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:56:16.360617   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360628   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:56:16.360643   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:56:16.360656   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360671   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360682   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:56:16.360699   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.360711   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:56:16.360726   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.360736   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:56:16.360749   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360762   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.377860   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:56:16.377891   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:56:16.394828   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.406131    2634 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	I1229 06:56:16.394877   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519501    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64sn\" (UniqueName: \"kubernetes.io/projected/00a95e37-1394-45a7-a376-b195e31e3e9c-kube-api-access-b64sn\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:56:16.394896   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519550    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a95e37-1394-45a7-a376-b195e31e3e9c-config-volume\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:56:16.394920   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519571    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:56:16.394952   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519587    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:56:16.394976   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.411642    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605"
	I1229 06:56:16.394988   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.545186    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.395012   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731196    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f201ca-6d54-4e15-9584-396fb1486f3c-tmp\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:56:16.395045   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731252    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc5d\" (UniqueName: \"kubernetes.io/projected/b5f201ca-6d54-4e15-9584-396fb1486f3c-kube-api-access-ghc5d\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:56:16.395075   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.628275    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395109   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.634714    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9mrnn" podStartSLOduration=2.634698273 podStartE2EDuration="2.634698273s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.631484207 +0000 UTC m=+7.674081027" watchObservedRunningTime="2025-12-29 06:52:48.634698273 +0000 UTC m=+7.677295093"
	I1229 06:56:16.395143   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.649761    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.395179   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.694857    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfq7m" podStartSLOduration=2.694842541 podStartE2EDuration="2.694842541s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.672691157 +0000 UTC m=+7.715287974" watchObservedRunningTime="2025-12-29 06:52:48.694842541 +0000 UTC m=+7.737439360"
	I1229 06:56:16.395221   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.728097    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.728082592 podStartE2EDuration="1.728082592s" podCreationTimestamp="2025-12-29 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.696376688 +0000 UTC m=+7.738973499" watchObservedRunningTime="2025-12-29 06:52:48.728082592 +0000 UTC m=+7.770679413"
	I1229 06:56:16.395242   17440 command_runner.go:130] > Dec 29 06:52:49 functional-695625 kubelet[2634]: E1229 06:52:49.674249    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395263   17440 command_runner.go:130] > Dec 29 06:52:50 functional-695625 kubelet[2634]: E1229 06:52:50.680852    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395283   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.223368    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.395324   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: I1229 06:52:52.243928    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g7lp9" podStartSLOduration=7.243911092 podStartE2EDuration="7.243911092s" podCreationTimestamp="2025-12-29 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.744380777 +0000 UTC m=+7.786977597" watchObservedRunningTime="2025-12-29 06:52:52.243911092 +0000 UTC m=+11.286507895"
	I1229 06:56:16.395347   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.396096    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.395368   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.693687    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.395390   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: E1229 06:52:53.390926    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.395423   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979173    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:56:16.395451   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979225    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:56:16.395496   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979732    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	I1229 06:56:16.395529   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.981248    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "kube-api-access-lc5xj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	I1229 06:56:16.395551   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079447    2634 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:56:16.395578   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079521    2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:56:16.395597   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.715729    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395618   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.756456    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395641   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: E1229 06:52:54.758451    2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395678   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.758508    2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"} err="failed to get container status \"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395702   17440 command_runner.go:130] > Dec 29 06:52:55 functional-695625 kubelet[2634]: I1229 06:52:55.144582    2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4313c5f-3b86-48de-8f3c-02d7e007542a" path="/var/lib/kubelet/pods/c4313c5f-3b86-48de-8f3c-02d7e007542a/volumes"
	I1229 06:56:16.395719   17440 command_runner.go:130] > Dec 29 06:52:58 functional-695625 kubelet[2634]: E1229 06:52:58.655985    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.395743   17440 command_runner.go:130] > Dec 29 06:53:20 functional-695625 kubelet[2634]: E1229 06:53:20.683378    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395770   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913108    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.395806   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913180    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395831   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913193    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395859   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915141    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.395885   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915181    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395903   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915192    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395929   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139490    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.395956   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139600    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395981   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139623    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396000   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139634    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396027   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917175    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396052   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917271    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396087   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917284    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396114   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918722    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396138   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918780    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396161   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918792    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396186   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139097    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396267   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139170    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396295   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139187    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396315   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139214    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396339   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921730    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396362   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921808    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396387   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921823    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396413   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.923664    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396433   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924161    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396458   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924185    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396484   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139396    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396508   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139458    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396526   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139472    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396550   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139485    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396585   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396609   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396634   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:56:16.396662   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396687   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:56:16.396711   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396739   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396763   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396786   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396821   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396848   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396872   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396891   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396919   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396943   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396966   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396989   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397016   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397040   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397064   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397089   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397114   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397139   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397161   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397187   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397211   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397233   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397256   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397281   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397307   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397330   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397358   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397387   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397424   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397450   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397477   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397500   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397521   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397544   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397571   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397594   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397618   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397644   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397668   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397686   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397742   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397766   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397786   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397818   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397849   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397872   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397897   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397918   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:56:16.397940   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:56:16.397961   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.397984   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.398006   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.398027   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.398047   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:56:16.398071   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.398100   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398122   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398141   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398162   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:56:16.398186   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:56:16.398209   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:56:16.398244   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:56:16.398272   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.398294   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:56:16.398317   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:56:16.398350   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:56:16.398371   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:56:16.398394   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.398413   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.398456   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.398481   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.398498   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:56:16.398525   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:56:16.398557   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.398599   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:56:16.398632   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:56:16.398661   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:56:16.398683   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.398714   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.398746   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:56:16.398769   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.398813   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.398843   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.398873   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398910   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398942   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398963   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:56:16.398985   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399007   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:56:16.399028   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399052   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:56:16.399082   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399104   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:56:16.399121   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.399145   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399170   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399191   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399209   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399231   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.399253   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399275   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399295   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:56:16.399309   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399328   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:56:16.399366   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399402   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399416   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:56:16.399427   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:56:16.399440   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:56:16.399454   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399467   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:56:16.399491   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399517   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.399553   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399565   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:56:16.399576   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:56:16.399588   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399598   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.399618   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399629   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399640   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:56:16.399653   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:56:16.399671   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399684   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399694   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.399724   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399741   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399752   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:56:16.399771   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399782   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399801   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.399822   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399834   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399845   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399857   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399866   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:56:16.399885   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399928   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:56:16.400087   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.400109   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.400130   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.400140   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.400147   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:56:16.400153   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:56:16.400162   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:56:16.400169   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:56:16.400175   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:56:16.400184   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:56:16.400193   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.400201   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:56:16.400213   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:56:16.400222   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:56:16.400233   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:56:16.400243   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.400253   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:56:16.400262   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:56:16.400272   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:56:16.400281   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:56:16.400693   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:56:16.400713   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:56:16.400724   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:56:16.400734   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:56:16.400742   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:56:16.400751   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:56:16.400760   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:56:16.400768   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:56:16.400780   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:56:16.400812   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:56:16.400833   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:56:16.400853   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:56:16.400868   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:56:16.400877   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:56:16.400887   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.400896   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:56:16.400903   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:56:16.400915   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:56:16.400924   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:56:16.400936   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:56:16.400950   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:56:16.400961   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:56:16.400972   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:56:16.400985   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:56:16.400993   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:56:16.401003   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:56:16.401016   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:56:16.401027   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:56:16.401036   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:56:16.401045   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:56:16.401053   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:56:16.401070   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:56:16.401083   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:56:16.401100   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.401120   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.401132   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:56:16.401141   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:56:16.401150   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:56:16.401160   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:56:16.401173   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:56:16.401180   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:56:16.401189   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:56:16.401198   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:56:16.401209   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:56:16.401217   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:56:16.401228   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:56:16.401415   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:56:16.401435   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:56:16.401444   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:56:16.401456   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:56:16.401467   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401486   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401508   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401529   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:56:16.401553   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:56:16.401575   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:56:16.401589   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:56:16.401602   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:56:16.401614   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:56:16.401628   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401640   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.401653   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:56:16.401667   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:56:16.401679   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.401693   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:56:16.401706   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:56:16.401720   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:56:16.401733   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401745   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.401762   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:56:16.401816   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401840   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401871   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:56:16.401900   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401920   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:56:16.401958   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.401977   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.401987   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402002   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402019   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:56:16.402033   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402048   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:56:16.402065   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:56:16.402085   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402107   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402134   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402169   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402204   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.402228   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402250   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402272   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402294   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402314   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.402335   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.402349   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402367   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:56:16.402405   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.402421   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.402433   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402444   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402530   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402557   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:56:16.402569   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402585   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:56:16.402600   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402639   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.402655   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.402666   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402677   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402697   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:56:16.402714   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:56:16.402726   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402737   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.402752   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:56:16.402917   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.402934   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.402947   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.402959   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.402972   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.402996   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403011   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403026   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403043   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403056   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403070   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403082   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403096   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403110   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403125   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403138   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403152   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403292   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403310   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403325   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403339   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403361   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403376   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403389   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403402   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403417   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403428   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403450   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403464   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.403480   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403495   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403506   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403636   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403671   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403686   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403702   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403720   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403739   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403753   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403767   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403780   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403806   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403820   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403833   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403850   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403871   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403890   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403914   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403936   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403952   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:56:16.403976   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403994   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.404007   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.404022   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.404034   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.404046   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:56:16.404066   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.404085   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:56:16.404122   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.454878   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:56:16.454917   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:56:16.478085   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.478126   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:56:16.478136   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:56:16.478148   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:56:16.478155   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:56:16.478166   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.478175   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:56:16.478185   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:56:16.478194   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:56:16.478203   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.478825   17440 logs.go:123] Gathering logs for kube-controller-manager [f48fc04e3475] ...
	I1229 06:56:16.478843   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48fc04e3475"
	I1229 06:56:16.501568   17440 command_runner.go:130] ! I1229 06:56:01.090404       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.501592   17440 command_runner.go:130] ! I1229 06:56:01.103535       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:56:16.501601   17440 command_runner.go:130] ! I1229 06:56:01.103787       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.501610   17440 command_runner.go:130] ! I1229 06:56:01.105458       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:56:16.501623   17440 command_runner.go:130] ! I1229 06:56:01.105665       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.501630   17440 command_runner.go:130] ! I1229 06:56:01.105907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:56:16.501636   17440 command_runner.go:130] ! I1229 06:56:01.105924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.501982   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:56:16.501996   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:56:16.524487   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.524514   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:56:16.524523   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.524767   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:56:16.524788   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.524805   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:56:16.524812   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.526406   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:56:16.526437   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:57:16.604286   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:57:16.606268   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.079810784s)
	W1229 06:57:16.606306   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:57:16.606317   17440 logs.go:123] Gathering logs for kube-apiserver [18d0015c724a] ...
	I1229 06:57:16.606331   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d0015c724a"
	I1229 06:57:16.636305   17440 command_runner.go:130] ! Error response from daemon: No such container: 18d0015c724a
	W1229 06:57:16.636367   17440 logs.go:130] failed kube-apiserver [18d0015c724a]: command: /bin/bash -c "docker logs --tail 400 18d0015c724a" /bin/bash -c "docker logs --tail 400 18d0015c724a": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 18d0015c724a
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 18d0015c724a
	
	** /stderr **
	I1229 06:57:16.636376   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:57:16.636391   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:57:16.657452   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:57:19.160135   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:57:24.162053   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:57:24.162161   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:57:24.182182   17440 command_runner.go:130] > b206d555ad19
	I1229 06:57:24.183367   17440 logs.go:282] 1 containers: [b206d555ad19]
	I1229 06:57:24.183464   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:57:24.206759   17440 command_runner.go:130] > 6b7711ee25a2
	I1229 06:57:24.206821   17440 command_runner.go:130] > d81259f64136
	I1229 06:57:24.206853   17440 logs.go:282] 2 containers: [6b7711ee25a2 d81259f64136]
	I1229 06:57:24.206926   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:57:24.228856   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:57:24.228897   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:57:24.228968   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:57:24.247867   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:57:24.247890   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:57:24.249034   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:57:24.249130   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:57:24.268209   17440 command_runner.go:130] > 8911777281f4
	I1229 06:57:24.269160   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:57:24.269243   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:57:24.288837   17440 command_runner.go:130] > f48fc04e3475
	I1229 06:57:24.288871   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:57:24.290245   17440 logs.go:282] 2 containers: [f48fc04e3475 17fe16a2822a]
	I1229 06:57:24.290337   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:57:24.312502   17440 logs.go:282] 0 containers: []
	W1229 06:57:24.312531   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:57:24.312592   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:57:24.334811   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:57:24.334849   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:57:24.334875   17440 logs.go:123] Gathering logs for kube-apiserver [b206d555ad19] ...
	I1229 06:57:24.334888   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b206d555ad19"
	I1229 06:57:24.357541   17440 command_runner.go:130] ! I1229 06:57:22.434262       1 options.go:263] external host was not specified, using 192.168.39.121
	I1229 06:57:24.357567   17440 command_runner.go:130] ! I1229 06:57:22.436951       1 server.go:150] Version: v1.35.0
	I1229 06:57:24.357577   17440 command_runner.go:130] ! I1229 06:57:22.436991       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.357602   17440 command_runner.go:130] ! E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	W1229 06:57:24.359181   17440 logs.go:138] Found kube-apiserver [b206d555ad19] problem: E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:57:24.359206   17440 logs.go:123] Gathering logs for kube-controller-manager [f48fc04e3475] ...
	I1229 06:57:24.359218   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48fc04e3475"
	I1229 06:57:24.381077   17440 command_runner.go:130] ! I1229 06:56:01.090404       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:57:24.381103   17440 command_runner.go:130] ! I1229 06:56:01.103535       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:57:24.381113   17440 command_runner.go:130] ! I1229 06:56:01.103787       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.381121   17440 command_runner.go:130] ! I1229 06:56:01.105458       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:57:24.381131   17440 command_runner.go:130] ! I1229 06:56:01.105665       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.381137   17440 command_runner.go:130] ! I1229 06:56:01.105907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:57:24.381144   17440 command_runner.go:130] ! I1229 06:56:01.105924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:57:24.382680   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:57:24.382711   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:57:24.427354   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:57:24.427382   17440 command_runner.go:130] > b206d555ad194       5c6acd67e9cd1       2 seconds ago        Exited              kube-apiserver            5                   d3819cc8ab802       kube-apiserver-functional-695625            kube-system
	I1229 06:57:24.427400   17440 command_runner.go:130] > f48fc04e34751       2c9a4b058bd7e       About a minute ago   Running             kube-controller-manager   2                   0a96e34d38f8c       kube-controller-manager-functional-695625   kube-system
	I1229 06:57:24.427411   17440 command_runner.go:130] > 6b7711ee25a2d       0a108f7189562       About a minute ago   Running             etcd                      2                   173054afc2f39       etcd-functional-695625                      kube-system
	I1229 06:57:24.427421   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       3 minutes ago        Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:57:24.427441   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       3 minutes ago        Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:57:24.427454   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       3 minutes ago        Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:57:24.427465   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       3 minutes ago        Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:57:24.427477   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       3 minutes ago        Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:57:24.427488   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:57:24.427509   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       4 minutes ago        Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:57:24.430056   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:57:24.430095   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:57:24.453665   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453712   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453738   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:57:24.453770   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453809   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:57:24.453838   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453867   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.453891   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453911   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453928   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453945   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.453961   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453974   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454002   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454022   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454040   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454058   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454074   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454087   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454103   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454120   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454135   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454149   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454165   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454179   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454194   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454208   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454224   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454246   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454262   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454276   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454294   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454310   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454326   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454342   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454358   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454371   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454386   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454401   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454423   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454447   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454472   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454500   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454519   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454533   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454549   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454565   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454579   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454593   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454608   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454625   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454640   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454655   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:57:24.454667   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:57:24.454680   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.454697   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.454714   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.454729   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.454741   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:57:24.454816   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.454842   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454855   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454870   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454881   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:57:24.454896   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:57:24.454912   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:57:24.454940   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:57:24.454957   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.454969   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:57:24.454987   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:57:24.455012   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:57:24.455025   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:57:24.455039   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.455055   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.455081   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.455097   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.455110   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:57:24.455125   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:57:24.455144   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.455165   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:57:24.455186   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:57:24.455204   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:57:24.455224   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.455243   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.455275   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:57:24.455294   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455310   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.455326   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.455345   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455366   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455386   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455404   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:57:24.455423   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455446   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:57:24.455472   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455490   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:57:24.455506   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455528   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:57:24.455550   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.455573   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455588   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455603   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455615   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.455628   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.455640   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455657   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455669   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:57:24.455681   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455699   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:57:24.455720   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455739   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.455750   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:57:24.455810   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:57:24.455823   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:57:24.455835   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455848   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:57:24.455860   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455872   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.455892   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455904   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:57:24.455916   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:57:24.455930   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455967   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.455990   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456008   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456019   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:57:24.456031   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:57:24.456052   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456067   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.456078   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.456100   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.456114   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456124   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:57:24.456144   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456159   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.456169   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.456191   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456205   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.456216   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.456229   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456239   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:57:24.456260   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456304   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:57:24.456318   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.456331   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.456352   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456364   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.456372   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:57:24.456379   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:57:24.456386   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:57:24.456396   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:57:24.456406   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:57:24.456423   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:57:24.456441   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.456458   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:57:24.456472   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:57:24.456487   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:57:24.456503   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:57:24.456520   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.456540   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:57:24.456560   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:57:24.456573   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:57:24.456584   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:57:24.456626   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:57:24.456639   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:57:24.456647   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:57:24.456657   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:57:24.456665   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:57:24.456676   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:57:24.456685   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:57:24.456695   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:57:24.456703   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:57:24.456714   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:57:24.456726   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:57:24.456739   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:57:24.456748   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:57:24.456761   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:57:24.456771   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.456782   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:57:24.456790   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:57:24.456811   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:57:24.456821   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:57:24.456832   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:57:24.456845   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:57:24.456853   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:57:24.456866   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:57:24.456875   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:57:24.456885   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:57:24.456893   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:57:24.456907   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:57:24.456918   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:57:24.456927   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:57:24.456937   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:57:24.456947   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:57:24.456959   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:57:24.456971   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:57:24.456990   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.457011   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.457023   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:57:24.457032   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:57:24.457044   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:57:24.457054   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:57:24.457067   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:57:24.457074   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:57:24.457083   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:57:24.457093   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:57:24.457105   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:57:24.457112   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:57:24.457125   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:57:24.457133   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:57:24.457145   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:57:24.457154   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:57:24.457168   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:57:24.457178   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457192   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457205   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457220   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:57:24.457235   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:57:24.457247   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:57:24.457258   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:57:24.457271   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:57:24.457284   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:57:24.457299   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457310   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.457322   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:57:24.457333   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:57:24.457345   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.457359   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:57:24.457370   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:57:24.457381   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:57:24.457396   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457410   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.457436   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:57:24.457460   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457481   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457500   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:57:24.457515   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457533   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:57:24.457586   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.457604   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.457613   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.457633   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457649   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:57:24.457664   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457680   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:57:24.457697   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.457717   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457740   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457763   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457785   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457817   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.457904   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457927   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457948   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457976   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457996   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.458019   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.458034   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458050   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:57:24.458090   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.458106   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.458116   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.458130   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458141   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458158   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.458170   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458184   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.458198   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458263   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.458295   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.458316   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.458339   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458367   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.458389   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.458409   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458429   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.458447   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:57:24.458468   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.458490   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.458512   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458529   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458542   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458572   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458587   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458602   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.458617   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458632   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458644   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458659   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.458674   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458686   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.458702   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458717   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.458732   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458746   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458762   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458777   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458790   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458824   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458839   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458852   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458865   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458879   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458889   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458911   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458925   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458939   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458952   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458964   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458983   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458998   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459016   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459031   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459048   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.459062   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459076   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.459090   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459104   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459118   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459132   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459145   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.459158   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459174   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.459186   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.459201   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459215   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459225   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459247   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459261   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459274   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459286   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459302   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459314   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459334   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459352   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.459392   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.459418   17440 command_runner.go:130] > Dec 29 06:56:17 functional-695625 kubelet[6517]: E1229 06:56:17.801052    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.459438   17440 command_runner.go:130] > Dec 29 06:56:19 functional-695625 kubelet[6517]: I1229 06:56:19.403026    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.459461   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.297746    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459483   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342467    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459502   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342554    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459515   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.342589    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459537   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342829    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459552   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.385984    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459567   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386062    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459579   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.386078    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459599   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386220    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459613   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.298955    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459634   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.734998    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.459649   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185639    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459662   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185732    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459676   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.185750    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459693   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493651    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459707   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493733    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459720   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.493755    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459741   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493996    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459753   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.510294    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459769   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511464    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459782   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511520    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459806   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.511535    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459829   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511684    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459845   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525404    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459859   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525467    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459875   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: I1229 06:56:34.525482    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459897   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525663    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459911   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.300040    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459924   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342011    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459938   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342082    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459950   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.342099    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459972   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342223    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459987   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567456    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460000   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567665    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460016   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.567686    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460036   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.568152    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460053   17440 command_runner.go:130] > Dec 29 06:56:47 functional-695625 kubelet[6517]: E1229 06:56:47.736964    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.460094   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.098168    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.27202431 +0000 UTC m=+0.287773690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.460108   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.300747    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460124   17440 command_runner.go:130] > Dec 29 06:56:53 functional-695625 kubelet[6517]: E1229 06:56:53.405155    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.460136   17440 command_runner.go:130] > Dec 29 06:56:56 functional-695625 kubelet[6517]: I1229 06:56:56.606176    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.460148   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.301915    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460162   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.330173    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.460182   17440 command_runner.go:130] > Dec 29 06:57:04 functional-695625 kubelet[6517]: E1229 06:57:04.738681    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.460195   17440 command_runner.go:130] > Dec 29 06:57:10 functional-695625 kubelet[6517]: E1229 06:57:10.302083    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460206   17440 command_runner.go:130] > Dec 29 06:57:20 functional-695625 kubelet[6517]: E1229 06:57:20.302612    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460221   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185645    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460236   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185704    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.460254   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.740062    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.460269   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.185952    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460283   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.186017    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460296   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.186034    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460308   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.873051    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460321   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874264    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460334   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874357    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460347   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.874375    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:57:24.460367   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874499    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460381   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460395   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892083    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460414   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: I1229 06:57:23.892098    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:57:24.460450   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892218    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460499   17440 command_runner.go:130] > Dec 29 06:57:24 functional-695625 kubelet[6517]: E1229 06:57:24.100978    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.27223373 +0000 UTC m=+0.287983111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.513870   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:57:24.513913   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:57:24.542868   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:57:24.542904   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:57:24.542974   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:57:24.542992   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:57:24.543020   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:57:24.543037   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:57:24.543067   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:57:24.543085   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:57:24.543199   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:57:24.543237   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:57:24.543258   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:57:24.543276   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:57:24.543291   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:57:24.543306   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:57:24.543327   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:57:24.543344   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:57:24.543365   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:57:24.543380   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:57:24.543393   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:57:24.543419   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:57:24.543437   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:57:24.543464   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:57:24.543483   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:57:24.543499   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:57:24.543511   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:57:24.543561   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:57:24.543585   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:57:24.543605   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:57:24.543623   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:57:24.543659   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:57:24.543680   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:57:24.543701   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:57:24.543722   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:57:24.543744   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:57:24.543770   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:57:24.543821   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:57:24.543840   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:57:24.543865   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:57:24.543886   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:57:24.543908   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:57:24.543927   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:57:24.543945   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.543962   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:57:24.543980   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:57:24.544010   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:57:24.544031   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:57:24.544065   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:57:24.544084   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:57:24.544103   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:57:24.544120   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:57:24.544136   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:57:24.544157   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:57:24.544176   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:57:24.544193   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:57:24.544213   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:57:24.544224   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:57:24.544248   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:57:24.544264   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:57:24.544283   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:57:24.544298   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:57:24.544314   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:57:24.544331   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:57:24.544345   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:57:24.544364   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:57:24.544381   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:57:24.544405   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:57:24.544430   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:57:24.544465   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:57:24.544517   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544537   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:57:24.544554   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:57:24.544575   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:57:24.544595   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:57:24.544623   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544641   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:57:24.544662   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:57:24.544683   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:57:24.544711   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544730   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.544767   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544807   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.544828   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:57:24.552509   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:57:24.552540   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:57:24.575005   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:57:24.575036   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:57:24.597505   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.597545   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.597560   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.597577   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.597596   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.597610   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:57:24.597628   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:57:24.597642   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.597654   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.597667   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:57:24.597682   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.597705   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.597733   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.597753   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.597765   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.597773   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:57:24.597803   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.597814   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:57:24.597825   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.597834   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.597841   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:57:24.597848   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.597856   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.597866   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.597874   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.597883   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.597900   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.597909   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.597916   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.597925   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.597936   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.597944   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.597953   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:57:24.597960   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.597973   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.597981   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.597991   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.597999   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.598010   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.598017   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598029   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:57:24.598041   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:57:24.598054   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598067   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598074   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598084   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598095   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:57:24.598104   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.598111   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.598117   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.598126   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.598132   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:57:24.598141   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.598154   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.598174   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.598186   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.598196   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.598205   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:57:24.598224   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.598235   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:57:24.598246   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.598256   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.598264   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.598273   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.598281   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.598289   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.598297   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.598306   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.598314   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.598320   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:57:24.598327   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598334   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.598345   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.598354   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.598365   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.598373   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:57:24.598381   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.598389   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.598400   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.598415   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.598431   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.598447   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.598463   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598476   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598492   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598503   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:57:24.598513   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.598522   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.598531   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.598538   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.598545   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:57:24.598555   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.598578   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.598591   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.598602   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.598613   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.598621   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:57:24.598642   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.598653   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:57:24.598664   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.598674   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.598683   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.598693   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.598701   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.598716   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.598724   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.598732   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.598760   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598774   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598787   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598815   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598832   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:57:24.598845   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598860   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598873   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598889   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598904   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.598918   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.598933   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598946   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598958   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:57:24.598973   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:57:24.598980   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598989   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598999   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:57:24.599008   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.599015   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.599022   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.599030   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:57:24.599036   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.599043   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:57:24.599054   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.599065   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.599077   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.599088   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.599099   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.599107   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:57:24.599120   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599138   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599151   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599168   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599185   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599198   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599213   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599228   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599241   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599257   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599270   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599285   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599297   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599319   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.599331   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:57:24.599346   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:57:24.599359   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:57:24.599376   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:57:24.599387   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:57:24.599405   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:57:24.599423   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:57:24.599452   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:57:24.599472   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:57:24.599489   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:57:24.599503   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:57:24.599517   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:57:24.599529   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:57:24.599544   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:57:24.599559   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:57:24.599572   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:57:24.599587   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:57:24.599602   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:57:24.599615   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:57:24.599631   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:57:24.599644   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:57:24.599654   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:57:24.599664   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.599673   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.599682   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.599692   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.599700   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.599710   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.599747   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.599756   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.599772   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:57:24.599782   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.599789   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:57:24.599806   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.599814   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:57:24.599822   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.599830   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.599841   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.599849   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.599860   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:57:24.599868   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.599879   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.599886   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.599896   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.599907   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.599914   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.599922   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.599934   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599953   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599970   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599983   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600000   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600017   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600034   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:57:24.600049   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600063   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600079   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600092   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600107   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600121   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600137   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600152   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600164   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600177   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600190   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600207   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:57:24.600223   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600235   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:57:24.600247   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:57:24.600261   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600276   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600288   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:57:24.600304   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600317   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600331   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600345   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600357   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600373   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600386   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 dockerd[4014]: time="2025-12-29T06:56:32.448119389Z" level=info msg="ignoring event" container=0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600403   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.600423   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:57:24.600448   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600472   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600490   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 dockerd[4014]: time="2025-12-29T06:57:22.465508622Z" level=info msg="ignoring event" container=b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.619075   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:57:24.619123   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:58:24.700496   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:58:24.700542   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.081407425s)
	W1229 06:58:24.700578   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:58:24.700591   17440 logs.go:123] Gathering logs for etcd [6b7711ee25a2] ...
	I1229 06:58:24.700607   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b7711ee25a2"
	I1229 06:58:24.726206   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.924768Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:58:24.726238   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925193Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:58:24.726283   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925252Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib
/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:58:24.726296   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925487Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:58:24.726311   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925602Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:58:24.726321   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925710Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:58:24.726342   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925810Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:58:24.726358   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.934471Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:58:24.726438   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.935217Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["
*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:58:24.726461   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.937503Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000068080}"}
	I1229 06:58:24.726472   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940423Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:58:24.726483   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940850Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.479356ms"}
	I1229 06:58:24.726492   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.941120Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":499}
	I1229 06:58:24.726503   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945006Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:58:24.726517   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945707Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:58:24.726528   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945966Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:58:24.726540   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.951906Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":499}
	I1229 06:58:24.726552   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952063Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:58:24.726560   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952160Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:58:24.726577   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952338Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:58:24.726590   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952385Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:58:24.726607   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952396Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:58:24.726618   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952406Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:58:24.726629   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952416Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:58:24.726636   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952460Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:58:24.726647   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:58:24.726657   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 3"}
	I1229 06:58:24.726670   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 3, commit: 499, applied: 0, lastindex: 499, lastterm: 3]"}
	I1229 06:58:24.726680   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.955095Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:58:24.726698   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.961356Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:58:24.726711   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.967658Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:58:24.726723   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.968487Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:58:24.726735   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969020Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:58:24.726750   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969260Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:58:24.726765   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969708Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:58:24.726784   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970043Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:58:24.726826   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970828Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:58:24.726839   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971046Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:58:24.726848   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970057Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:58:24.726858   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971258Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:58:24.726870   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970152Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:58:24.726883   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971336Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:58:24.726896   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971370Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:58:24.726906   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970393Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:58:24.726922   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972410Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:58:24.726935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972698Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:58:24.726947   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 3"}
	I1229 06:58:24.726956   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 3"}
	I1229 06:58:24.726969   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:58:24.726982   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:58:24.726997   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 4"}
	I1229 06:58:24.727009   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 4"}
	I1229 06:58:24.727020   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:58:24.727029   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 4"}
	I1229 06:58:24.727039   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.356018Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 4"}
	I1229 06:58:24.727056   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358237Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:58:24.727064   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358323Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:58:24.727072   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358268Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:58:24.727081   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:58:24.727089   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:58:24.727100   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360417Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:58:24.727109   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:58:24.727120   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:58:24.727132   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363760Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
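
The etcd log above shows the member restarting, winning the election at term 4, and serving clients on 127.0.0.1:2379 and 192.168.39.121:2379. A quick way to confirm the store is actually healthy at that point is to probe it with etcdctl using the certificate paths etcd was started with; this is only a sketch and assumes etcdctl is reachable inside the guest (for example via the etcd container) -- the test itself does not run this:

    # Hypothetical health probe against the endpoints shown in the log above;
    # certificate paths are the ones passed to etcd on startup.
    sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health
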
	I1229 06:58:24.733042   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:58:24.733064   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:58:24.755028   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.755231   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:58:24.755256   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:58:24.776073   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:58:24.776109   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:58:24.776120   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:58:24.776135   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:58:24.776154   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:58:24.776162   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:58:24.776180   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:58:24.776188   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:58:24.776195   17440 command_runner.go:130] !  >
	I1229 06:58:24.776212   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:58:24.776224   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:58:24.776249   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:58:24.776257   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:58:24.776266   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.776282   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:58:24.776296   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:58:24.776307   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:58:24.776328   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:58:24.776350   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:58:24.776366   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:58:24.776376   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:58:24.776388   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:58:24.776404   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:58:24.776420   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:58:24.776439   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:58:24.776453   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
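
kube-proxy above drops to single-stack IPv4 because the ip6tables nat table is missing in the guest kernel ("do you need to insmod?"). If IPv6 proxying mattered, the check would look roughly like this; a hedged sketch only, and the module name is assumed to match the stock guest kernel:

    # Does the IPv6 nat table exist at all?
    sudo ip6tables -t nat -L -n
    # Is the backing kernel module present / loadable?
    lsmod | grep ip6table_nat
    sudo modprobe ip6table_nat
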
	I1229 06:58:24.778558   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:58:24.778595   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:58:24.793983   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:58:24.794025   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:58:24.794040   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:58:24.794054   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:58:24.794069   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:58:24.794079   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:58:24.794096   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:58:24.794106   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:58:24.794117   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:58:24.794125   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:58:24.794136   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:58:24.794146   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:58:24.794160   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:24.794167   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:58:24.794178   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:58:24.794186   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:24.794196   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:24.794207   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:24.794215   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:58:24.794221   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:58:24.794229   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:58:24.794241   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:58:24.794252   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:58:24.794260   17440 command_runner.go:130] > [ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:24.794271   17440 command_runner.go:130] > [Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:24.795355   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:58:24.795387   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:58:24.820602   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.820635   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:58:24.820646   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:58:24.820657   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:58:24.820665   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:58:24.820672   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.820681   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:58:24.820692   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:58:24.820698   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:58:24.820705   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:58:24.822450   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:58:24.822473   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:58:24.844122   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.844156   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:58:24.844170   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.844184   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:58:24.844201   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:24.844210   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:58:24.844218   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:58:24.845429   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:58:24.845453   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:58:24.867566   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:58:24.867597   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:58:24.867607   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:58:24.867615   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867622   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867633   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:58:24.867653   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:58:24.867681   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:58:24.867694   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867704   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867719   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:58:24.867734   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867750   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867763   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867817   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867836   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867848   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867859   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867871   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867883   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867891   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867901   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867914   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867926   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867944   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867956   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867972   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867982   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867997   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868013   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868028   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868048   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868063   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868071   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868081   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868098   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868111   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868127   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868140   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868153   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868164   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868177   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868192   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868207   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868221   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868236   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868247   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868258   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868275   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868290   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868304   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868320   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868332   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868342   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868358   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868373   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868385   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868400   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868414   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868425   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868438   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
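
The repeated storage-provisioner warnings above come from its Endpoints-based leader election on kube-system/k8s.io-minikube-hostpath, which trips the v1 Endpoints deprecation notice on every lease renewal. Assuming kubectl access to the cluster still works at that point in the run, the object it keeps renewing can be inspected like this (illustrative only):

    # Inspect the Endpoints object used as the leader-election lock.
    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
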
	I1229 06:58:24.872821   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:58:24.872842   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 06:58:24.872901   17440 out.go:285] X Problems detected in kube-apiserver [b206d555ad19]:
	W1229 06:58:24.872915   17440 out.go:285]   E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:58:24.872919   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:58:24.872923   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
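
The problem flagged here is that kube-apiserver [b206d555ad19] could not bind 0.0.0.0:8441 because another process already holds the port. A first diagnostic step would be to look at what owns the socket inside the guest; a hedged sketch using the profile name from this run, not something the test performs:

    # Show the listener on 8441 and the owning process (run inside the VM).
    minikube ssh -p functional-695625 -- sudo ss -ltnp | grep 8441
    # A stale apiserver container, if any, should also show up here.
    minikube ssh -p functional-695625 -- sudo docker ps --filter name=kube-apiserver
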
	I1229 06:58:34.875381   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:58:39.877679   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:58:39.877779   17440 kubeadm.go:602] duration metric: took 4m48.388076341s to restartPrimaryControlPlane
	W1229 06:58:39.877879   17440 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1229 06:58:39.877946   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 06:58:39.930050   17440 command_runner.go:130] ! W1229 06:58:39.921577    8187 resetconfiguration.go:53] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1229 06:58:49.935089   17440 command_runner.go:130] ! W1229 06:58:49.926653    8187 reset.go:141] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
	I1229 06:58:49.935131   17440 command_runner.go:130] ! W1229 06:58:49.926754    8187 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
	I1229 06:58:50.998307   17440 command_runner.go:130] > [reset] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I1229 06:58:50.998341   17440 command_runner.go:130] > [reset] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
	I1229 06:58:50.998348   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:50.998357   17440 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/minikube/etcd
	I1229 06:58:50.998366   17440 command_runner.go:130] > [reset] Stopping the kubelet service
	I1229 06:58:50.998372   17440 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I1229 06:58:50.998386   17440 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I1229 06:58:50.998407   17440 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I1229 06:58:50.998417   17440 command_runner.go:130] > The reset process does not perform cleanup of CNI plugin configuration,
	I1229 06:58:50.998428   17440 command_runner.go:130] > network filtering rules and kubeconfig files.
	I1229 06:58:50.998434   17440 command_runner.go:130] > For information on how to perform this cleanup manually, please see:
	I1229 06:58:50.998442   17440 command_runner.go:130] >     https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
	I1229 06:58:50.998458   17440 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (11.120499642s)
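
As the reset output above notes, kubeadm does not clean up CNI configuration, packet-filtering rules, or kubeconfig files. The usual manual follow-up described in the linked kubeadm-reset documentation looks roughly like the following; shown only as a sketch, since minikube takes care of this itself before the next init:

    # Remove CNI plugin configuration left behind by the old cluster.
    sudo rm -rf /etc/cni/net.d
    # Flush packet-filtering rules installed by kube-proxy / the CNI plugin.
    sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
    # Drop the now-stale kubeconfig, if one was copied out for this cluster.
    rm -f $HOME/.kube/config
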
	I1229 06:58:50.998527   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:58:51.015635   17440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:58:51.028198   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:58:51.040741   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1229 06:58:51.040780   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1229 06:58:51.040811   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1229 06:58:51.040826   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.040865   17440 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.040877   17440 kubeadm.go:158] found existing configuration files:
	
	I1229 06:58:51.040925   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:58:51.051673   17440 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.052090   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.052155   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:58:51.064755   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:58:51.076455   17440 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.076517   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.076577   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:58:51.088881   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.099253   17440 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.099652   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.099710   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.111487   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:58:51.122532   17440 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.122905   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.122972   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
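
The sequence above (ls the four kubeconfigs, grep each for the control-plane endpoint, then rm) is minikube's stale-config cleanup for /etc/kubernetes/*.conf. Condensed into a standalone loop it amounts to something like this sketch, with the endpoint taken from the log:

    # For each kubeconfig kubeadm manages, drop it unless it already points
    # at the expected control-plane endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
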
	I1229 06:58:51.135143   17440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 06:58:51.355420   17440 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.355450   17440 command_runner.go:130] ! 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.355543   17440 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 06:58:51.355556   17440 command_runner.go:130] ! [preflight] Some fatal errors occurred:
	I1229 06:58:51.355615   17440 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.355625   17440 command_runner.go:130] ! 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.355790   17440 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.355837   17440 command_runner.go:130] ! [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.356251   17440 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.356265   17440 command_runner.go:130] ! error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.356317   17440 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.356324   17440 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.357454   17440 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.357471   17440 command_runner.go:130] > [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.357544   17440 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:58:51.357561   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	W1229 06:58:51.357680   17440 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
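
Both init attempts die in preflight on the same check, [ERROR Port-8441]. The preflight message itself names the escape hatch: the check's label can be passed to --ignore-preflight-errors to demote it to a warning. Purely as an illustration of that flag (the test does not do this, and ignoring the check would not resolve the underlying port conflict):

    # Demote the Port-8441 preflight check from fatal to a warning.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Port-8441
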
	
	I1229 06:58:51.357753   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 06:58:51.401004   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:51.401036   17440 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I1229 06:58:51.401047   17440 command_runner.go:130] > [reset] Stopping the kubelet service
	I1229 06:58:51.408535   17440 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I1229 06:58:51.413813   17440 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I1229 06:58:51.415092   17440 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I1229 06:58:51.415117   17440 command_runner.go:130] > The reset process does not perform cleanup of CNI plugin configuration,
	I1229 06:58:51.415128   17440 command_runner.go:130] > network filtering rules and kubeconfig files.
	I1229 06:58:51.415137   17440 command_runner.go:130] > For information on how to perform this cleanup manually, please see:
	I1229 06:58:51.415145   17440 command_runner.go:130] >     https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
	I1229 06:58:51.415645   17440 command_runner.go:130] ! W1229 06:58:51.391426    8625 resetconfiguration.go:53] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1229 06:58:51.415670   17440 command_runner.go:130] ! W1229 06:58:51.392518    8625 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
	I1229 06:58:51.415739   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:58:51.432316   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:58:51.444836   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1229 06:58:51.444860   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1229 06:58:51.444867   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1229 06:58:51.444874   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.445417   17440 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.445435   17440 kubeadm.go:158] found existing configuration files:
	
	I1229 06:58:51.445485   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:58:51.457038   17440 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.457099   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.457146   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:58:51.469980   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:58:51.480965   17440 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.481435   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.481498   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:58:51.493408   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.504342   17440 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.504404   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.504468   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.516567   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:58:51.526975   17440 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.527475   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.527532   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:58:51.539365   17440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 06:58:51.587038   17440 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.587068   17440 command_runner.go:130] > [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.587108   17440 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:58:51.587113   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:51.738880   17440 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.738912   17440 command_runner.go:130] ! 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.738963   17440 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 06:58:51.738975   17440 command_runner.go:130] ! [preflight] Some fatal errors occurred:
	I1229 06:58:51.739029   17440 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.739038   17440 command_runner.go:130] ! 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.739157   17440 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.739166   17440 command_runner.go:130] ! [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.739271   17440 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.739294   17440 command_runner.go:130] ! error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.739348   17440 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.739355   17440 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.739406   17440 kubeadm.go:403] duration metric: took 5m0.289116828s to StartCluster
	I1229 06:58:51.739455   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 06:58:51.739507   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 06:58:51.776396   17440 cri.go:96] found id: ""
	I1229 06:58:51.776420   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.776428   17440 logs.go:284] No container was found matching "kube-apiserver"
	I1229 06:58:51.776434   17440 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 06:58:51.776522   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 06:58:51.808533   17440 cri.go:96] found id: ""
	I1229 06:58:51.808556   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.808563   17440 logs.go:284] No container was found matching "etcd"
	I1229 06:58:51.808570   17440 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 06:58:51.808625   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 06:58:51.841860   17440 cri.go:96] found id: ""
	I1229 06:58:51.841887   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.841894   17440 logs.go:284] No container was found matching "coredns"
	I1229 06:58:51.841900   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 06:58:51.841955   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 06:58:51.875485   17440 cri.go:96] found id: ""
	I1229 06:58:51.875512   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.875520   17440 logs.go:284] No container was found matching "kube-scheduler"
	I1229 06:58:51.875526   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 06:58:51.875576   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 06:58:51.909661   17440 cri.go:96] found id: ""
	I1229 06:58:51.909699   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.909712   17440 logs.go:284] No container was found matching "kube-proxy"
	I1229 06:58:51.909720   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 06:58:51.909790   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 06:58:51.943557   17440 cri.go:96] found id: ""
	I1229 06:58:51.943594   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.943607   17440 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 06:58:51.943616   17440 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 06:58:51.943685   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 06:58:51.979189   17440 cri.go:96] found id: ""
	I1229 06:58:51.979219   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.979228   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:58:51.979234   17440 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 06:58:51.979285   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 06:58:52.013436   17440 cri.go:96] found id: ""
	I1229 06:58:52.013472   17440 logs.go:282] 0 containers: []
	W1229 06:58:52.013482   17440 logs.go:284] No container was found matching "storage-provisioner"
	I1229 06:58:52.013494   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:58:52.013507   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:58:52.030384   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.030429   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:58:52.030454   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.030481   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030506   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030530   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030550   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:58:52.030574   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:58:52.030601   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:58:52.030643   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:58:52.030670   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.030694   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:58:52.030721   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:58:52.030757   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:58:52.030787   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:58:52.030826   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.030853   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.030893   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.030921   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:58:52.030943   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:58:52.030981   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:58:52.031015   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.031053   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:58:52.031087   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:58:52.031117   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:58:52.031146   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.031189   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.031223   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:58:52.031253   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.031281   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.031311   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.031347   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031383   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031422   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031445   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:58:52.031467   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031491   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:58:52.031516   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031538   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:58:52.031562   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031584   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:58:52.031606   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.031628   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031651   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031673   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031695   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.031717   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.031738   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031763   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031786   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:58:52.031824   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031855   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:58:52.031894   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.031949   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.031981   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:58:52.032005   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:58:52.032025   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:58:52.032048   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032069   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:58:52.032093   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.032112   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.032150   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.032170   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:58:52.032192   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:58:52.032214   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032234   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032269   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032290   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032314   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:58:52.032335   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:58:52.032371   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032395   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.032414   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.032452   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.032473   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032495   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:58:52.032530   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032552   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032573   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032608   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032631   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032655   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032676   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032696   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:58:52.032735   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032819   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:58:52.032845   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032864   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032899   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032919   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:52.032935   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:58:52.032948   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:58:52.032960   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.032981   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:58:52.032995   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.033012   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:58:52.033029   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:52.033042   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:58:52.033062   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:58:52.033080   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:58:52.033101   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:58:52.033120   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:52.033138   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:58:52.033166   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:58:52.033187   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:58:52.033206   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:58:52.033274   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:58:52.033294   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:58:52.033309   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:58:52.033326   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:58:52.033343   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:58:52.033359   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:58:52.033378   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:58:52.033398   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:58:52.033413   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:58:52.033431   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:58:52.033453   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:58:52.033476   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:58:52.033492   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:58:52.033507   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:58:52.033526   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033542   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:58:52.033559   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:58:52.033609   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:58:52.033625   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:58:52.033642   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:58:52.033665   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:58:52.033681   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:58:52.033700   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:58:52.033718   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:58:52.033734   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:58:52.033751   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:58:52.033776   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:58:52.033808   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:58:52.033826   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:58:52.033840   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:58:52.033855   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:58:52.033878   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:58:52.033905   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:58:52.033937   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033974   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033993   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:58:52.034010   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:58:52.034030   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:58:52.034050   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:58:52.034084   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:58:52.034099   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:58:52.034116   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:58:52.034134   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:58:52.034152   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:58:52.034167   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:58:52.034186   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:58:52.034203   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:58:52.034224   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:58:52.034241   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:58:52.034265   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:58:52.034286   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034308   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034332   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034358   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:58:52.034380   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:58:52.034404   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:58:52.034427   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:58:52.034450   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:58:52.034472   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:58:52.034499   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034521   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.034544   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:58:52.034566   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:58:52.034588   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:58:52.034611   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:58:52.034633   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:58:52.034655   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:58:52.034678   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034697   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.034724   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:58:52.034749   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034771   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034819   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:58:52.034843   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034873   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:58:52.034936   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.034963   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.034993   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035018   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035049   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:58:52.035071   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035099   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:58:52.035126   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.035159   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035194   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035228   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035263   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035299   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.035333   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035368   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035408   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035445   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035477   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.035512   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.035534   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035563   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:58:52.035631   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.035658   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.035677   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035699   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035720   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035749   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.035771   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035814   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.035838   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035902   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.035927   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.035947   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035978   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036010   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.036038   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.036061   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036082   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.036102   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:58:52.036121   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.036141   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.036165   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036190   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036212   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036251   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036275   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036299   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.036323   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036345   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036369   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036393   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.036418   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036441   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.036464   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036488   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.036511   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036536   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036561   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036584   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036606   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036642   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036664   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036687   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036711   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036734   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036754   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036806   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036895   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036922   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036945   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036973   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037009   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037032   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037052   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037076   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037098   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.037122   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037144   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.037168   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037189   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037212   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037235   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037254   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037278   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037303   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.037325   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037348   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037372   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037392   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037424   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037449   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037472   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037497   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037518   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037539   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037574   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037604   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.037669   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.037694   17440 command_runner.go:130] > Dec 29 06:56:17 functional-695625 kubelet[6517]: E1229 06:56:17.801052    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.037713   17440 command_runner.go:130] > Dec 29 06:56:19 functional-695625 kubelet[6517]: I1229 06:56:19.403026    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.037734   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.297746    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.037760   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342467    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037784   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342554    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037816   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.342589    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037851   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342829    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037875   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.385984    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037897   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386062    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037917   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.386078    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037950   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386220    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037981   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.298955    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038011   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.734998    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.038035   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185639    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038059   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185732    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038079   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.185750    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.038102   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493651    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038125   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493733    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038147   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.493755    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038182   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493996    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038203   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.510294    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.038223   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511464    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038243   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511520    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038260   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.511535    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038297   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511684    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038321   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525404    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038344   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525467    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038365   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: I1229 06:56:34.525482    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038401   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525663    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038423   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.300040    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038449   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342011    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038471   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342082    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038491   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.342099    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038526   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342223    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038549   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567456    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038585   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567665    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038608   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.567686    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038643   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.568152    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038670   17440 command_runner.go:130] > Dec 29 06:56:47 functional-695625 kubelet[6517]: E1229 06:56:47.736964    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.038735   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.098168    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.27202431 +0000 UTC m=+0.287773690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.038758   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.300747    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038785   17440 command_runner.go:130] > Dec 29 06:56:53 functional-695625 kubelet[6517]: E1229 06:56:53.405155    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.038817   17440 command_runner.go:130] > Dec 29 06:56:56 functional-695625 kubelet[6517]: I1229 06:56:56.606176    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.038842   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.301915    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038869   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.330173    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.038900   17440 command_runner.go:130] > Dec 29 06:57:04 functional-695625 kubelet[6517]: E1229 06:57:04.738681    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.038922   17440 command_runner.go:130] > Dec 29 06:57:10 functional-695625 kubelet[6517]: E1229 06:57:10.302083    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038946   17440 command_runner.go:130] > Dec 29 06:57:20 functional-695625 kubelet[6517]: E1229 06:57:20.302612    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038977   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185645    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039003   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185704    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.039034   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.740062    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.039059   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.185952    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039082   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.186017    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039102   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.186034    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.039126   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.873051    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.039149   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874264    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039171   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874357    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039191   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.874375    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039227   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874499    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039252   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039275   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892083    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039295   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: I1229 06:57:23.892098    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039330   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892218    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039396   17440 command_runner.go:130] > Dec 29 06:57:24 functional-695625 kubelet[6517]: E1229 06:57:24.100978    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.27223373 +0000 UTC m=+0.287983111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.039419   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.302837    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039444   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.341968    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039468   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.342033    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039488   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: I1229 06:57:30.342050    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039523   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.342233    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039550   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.608375    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.039576   17440 command_runner.go:130] > Dec 29 06:57:32 functional-695625 kubelet[6517]: E1229 06:57:32.186377    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039598   17440 command_runner.go:130] > Dec 29 06:57:32 functional-695625 kubelet[6517]: E1229 06:57:32.186459    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.039675   17440 command_runner.go:130] > Dec 29 06:57:33 functional-695625 kubelet[6517]: E1229 06:57:33.188187    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039700   17440 command_runner.go:130] > Dec 29 06:57:33 functional-695625 kubelet[6517]: E1229 06:57:33.188267    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.039715   17440 command_runner.go:130] > Dec 29 06:57:37 functional-695625 kubelet[6517]: I1229 06:57:37.010219    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.039749   17440 command_runner.go:130] > Dec 29 06:57:38 functional-695625 kubelet[6517]: E1229 06:57:38.741770    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.039773   17440 command_runner.go:130] > Dec 29 06:57:40 functional-695625 kubelet[6517]: E1229 06:57:40.303258    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039808   17440 command_runner.go:130] > Dec 29 06:57:50 functional-695625 kubelet[6517]: E1229 06:57:50.304120    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039837   17440 command_runner.go:130] > Dec 29 06:57:55 functional-695625 kubelet[6517]: E1229 06:57:55.743031    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.039903   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 kubelet[6517]: E1229 06:57:58.103052    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.272240811 +0000 UTC m=+0.287990191,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.039929   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.304627    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039954   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.432518    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.039991   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.432667    6517 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)
	I1229 06:58:52.040014   17440 command_runner.go:130] > Dec 29 06:58:10 functional-695625 kubelet[6517]: E1229 06:58:10.305485    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040037   17440 command_runner.go:130] > Dec 29 06:58:11 functional-695625 kubelet[6517]: E1229 06:58:11.012407    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.040068   17440 command_runner.go:130] > Dec 29 06:58:12 functional-695625 kubelet[6517]: E1229 06:58:12.743824    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040086   17440 command_runner.go:130] > Dec 29 06:58:18 functional-695625 kubelet[6517]: I1229 06:58:18.014210    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.040107   17440 command_runner.go:130] > Dec 29 06:58:20 functional-695625 kubelet[6517]: E1229 06:58:20.306630    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040127   17440 command_runner.go:130] > Dec 29 06:58:24 functional-695625 kubelet[6517]: E1229 06:58:24.186554    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040149   17440 command_runner.go:130] > Dec 29 06:58:24 functional-695625 kubelet[6517]: E1229 06:58:24.186719    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.040176   17440 command_runner.go:130] > Dec 29 06:58:29 functional-695625 kubelet[6517]: E1229 06:58:29.745697    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040195   17440 command_runner.go:130] > Dec 29 06:58:30 functional-695625 kubelet[6517]: E1229 06:58:30.307319    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040256   17440 command_runner.go:130] > Dec 29 06:58:32 functional-695625 kubelet[6517]: E1229 06:58:32.105206    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.286010652 +0000 UTC m=+0.301760032,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.040279   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.184790    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040300   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.184918    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040319   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: I1229 06:58:39.184949    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040354   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.185100    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040377   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184709    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040397   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184771    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.040413   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.308010    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040433   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.185947    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040455   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.186016    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040477   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.186033    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040498   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503148    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040520   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503225    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040538   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.503241    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040576   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040596   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040619   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040640   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040658   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040692   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040711   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040729   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040741   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040764   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040784   17440 command_runner.go:130] > Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040807   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:58:52.040815   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:58:52.040821   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.040830   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	I1229 06:58:52.093067   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:58:52.093106   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:58:52.108863   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:58:52.108898   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:58:52.108912   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:58:52.108925   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:58:52.108937   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:58:52.108945   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:58:52.108951   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:58:52.108957   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:58:52.108962   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:58:52.108971   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:58:52.108975   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:58:52.108980   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:58:52.108992   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:52.108997   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:58:52.109006   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:58:52.109011   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:52.109021   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:52.109031   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:52.109036   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:58:52.109043   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:58:52.109048   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:58:52.109055   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:58:52.109062   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:58:52.109067   17440 command_runner.go:130] > [ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109072   17440 command_runner.go:130] > [Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109080   17440 command_runner.go:130] > [Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109088   17440 command_runner.go:130] > [  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109931   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:58:52.109946   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:59:52.193646   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:59:52.193695   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.083736259s)
	W1229 06:59:52.193730   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:59:52.193743   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:59:52.193757   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:59:52.211424   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.211464   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.211503   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.211519   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.211538   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:59:52.211555   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:59:52.211569   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:59:52.211587   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.211601   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.211612   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:59:52.211630   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.211652   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.211672   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.211696   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.211714   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.211730   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:59:52.211773   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.211790   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:59:52.211824   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.211841   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.211855   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:59:52.211871   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.211884   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.211899   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.211913   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.211926   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.211948   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.211959   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.211970   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.211984   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.212011   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.212025   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.212039   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:59:52.212064   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.212079   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.212093   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.212108   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.212125   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.212139   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.212152   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212172   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:59:52.212192   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:59:52.212215   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.212237   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.212252   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.212266   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.212285   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:59:52.212301   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.212316   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.212331   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.212341   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.212357   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:59:52.212372   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.212392   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.212423   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.212444   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.212461   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.212477   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:59:52.212512   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.212529   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:59:52.212547   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.212562   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.212577   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.212594   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.212612   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.212628   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.212643   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.212656   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.212671   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.212684   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:59:52.212699   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212714   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.212732   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.212751   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.212767   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.212783   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:59:52.212808   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.212827   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.212844   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.212864   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.212881   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.212899   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.212916   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212932   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.212949   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.212974   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:59:52.212995   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.213006   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.213020   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.213033   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.213055   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:59:52.213073   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.213094   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.213115   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.213135   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.213153   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.213169   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:59:52.213204   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.213221   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:59:52.213242   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.213258   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.213275   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.213291   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.213308   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.213321   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.213334   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.213348   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.213387   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213414   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213440   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213465   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213486   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:59:52.213507   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213528   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213549   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213573   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213595   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213616   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213637   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.213655   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.213675   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:59:52.213697   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:59:52.213709   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.213724   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.213735   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:59:52.213749   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.213759   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.213774   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.213786   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:59:52.213809   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.213822   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:59:52.213839   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.213856   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.213874   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.213891   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.213907   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.213920   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:59:52.213942   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213963   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213985   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214006   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214028   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214055   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214078   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214099   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214122   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214144   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214166   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214190   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214211   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214242   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.214258   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:59:52.214283   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:59:52.214298   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:59:52.214323   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:59:52.214341   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:59:52.214365   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:59:52.214380   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:59:52.214405   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:59:52.214421   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:59:52.214447   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:59:52.214464   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:59:52.214489   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:59:52.214506   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:59:52.214531   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:59:52.214553   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:59:52.214576   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:59:52.214600   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:59:52.214623   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:59:52.214646   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:59:52.214668   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:59:52.214690   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:59:52.214703   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:59:52.214721   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.214735   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.214748   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.214762   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.214775   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.214788   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.215123   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.215148   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.215180   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:59:52.215194   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.215210   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:59:52.215222   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.215233   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:59:52.215247   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.215265   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.215283   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.215299   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.215312   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:59:52.215324   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.215340   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.215355   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.215372   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.215389   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.215401   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.215409   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.215430   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215454   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215478   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215500   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215517   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215532   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215549   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:59:52.215565   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215578   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215593   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215606   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215622   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215643   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215667   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215688   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215712   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215738   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215762   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215839   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:59:52.215868   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215888   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:59:52.215912   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:59:52.215937   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215959   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215979   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:59:52.216007   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216027   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216051   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216067   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216084   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216097   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216112   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 dockerd[4014]: time="2025-12-29T06:56:32.448119389Z" level=info msg="ignoring event" container=0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216128   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216141   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216157   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216171   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216195   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 dockerd[4014]: time="2025-12-29T06:57:22.465508622Z" level=info msg="ignoring event" container=b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216222   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216243   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216263   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216276   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216289   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 dockerd[4014]: time="2025-12-29T06:58:43.458641345Z" level=info msg="ignoring event" container=07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216304   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.011072219Z" level=info msg="ignoring event" container=173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216318   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.102126666Z" level=info msg="ignoring event" container=6b7711ee25a2df71f8c7d296f7186875ebd6ab978a71d33f177de0cc3055645b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216331   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.266578298Z" level=info msg="ignoring event" container=a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216346   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.365376654Z" level=info msg="ignoring event" container=fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216365   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.452640794Z" level=info msg="ignoring event" container=4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216380   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.557330204Z" level=info msg="ignoring event" container=d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216392   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.666151542Z" level=info msg="ignoring event" container=0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216409   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.751481082Z" level=info msg="ignoring event" container=f48fc04e347519b276e239ee9a6b0b8e093862313e46174a1815efae670eec9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216427   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535': Error response from daemon: No such container: 4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535"
	I1229 06:59:52.216440   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535'"
	I1229 06:59:52.216455   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216467   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216484   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be': Error response from daemon: No such container: bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be"
	I1229 06:59:52.216495   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be'"
	I1229 06:59:52.216512   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e': Error response from daemon: No such container: a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e"
	I1229 06:59:52.216525   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e'"
	I1229 06:59:52.216542   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974': Error response from daemon: No such container: d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:59:52.216554   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974'"
	I1229 06:59:52.216568   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00': Error response from daemon: No such container: 6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:59:52.216582   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	I1229 06:59:52.216596   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216611   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216628   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	I1229 06:59:52.216642   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	I1229 06:59:52.216660   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:59:52.216673   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	I1229 06:59:52.238629   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:59:52.238668   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:59:52.287732   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	W1229 06:59:52.290016   17440 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	W1229 06:59:52.290080   17440 out.go:285] * 
	W1229 06:59:52.290145   17440 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 06:59:52.290156   17440 out.go:285] * 
	W1229 06:59:52.290452   17440 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:59:52.293734   17440 out.go:203] 
	W1229 06:59:52.295449   17440 out.go:285] X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 06:59:52.295482   17440 out.go:285] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1229 06:59:52.295500   17440 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1229 06:59:52.296904   17440 out.go:203] 
	
	
	==> Docker <==
	Dec 29 07:00:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:00:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	Dec 29 07:00:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:00:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	Dec 29 07:00:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:00:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	Dec 29 07:00:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:00:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	Dec 29 07:00:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:00:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	Dec 29 07:00:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:00:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	Dec 29 07:00:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:00:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="error getting RW layer size for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535': Error response from daemon: No such container: 4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535'"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="error getting RW layer size for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be': Error response from daemon: No such container: bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be'"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="error getting RW layer size for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e': Error response from daemon: No such container: a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e'"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="error getting RW layer size for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974': Error response from daemon: No such container: d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974'"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="error getting RW layer size for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00': Error response from daemon: No such container: 6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	Dec 29 07:01:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:01:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> kernel <==
	 07:03:43 up 11 min,  0 users,  load average: 0.00, 0.13, 0.13
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.185100    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184709    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184771    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.308010    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.185947    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.186016    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.186033    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503148    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503225    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.503241    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	

-- /stdout --
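The kubelet excerpt above shows kube-apiserver stuck in CrashLoopBackOff (back-off 2m40s) while the node object is still missing. A minimal way to inspect the failing container from inside the VM is sketched below, assuming the profile functional-695625 and the docker runtime recorded in the profile config; <container-id> is a placeholder, not a value from this run:

	# List kube-apiserver containers, including exited attempts, inside the node VM.
	out/minikube-linux-amd64 ssh -p functional-695625 -- sudo docker ps -a --filter name=kube-apiserver

	# Tail the logs of one attempt; substitute a real ID for <container-id>.
	out/minikube-linux-amd64 ssh -p functional-695625 -- sudo docker logs --tail 50 <container-id>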
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.813295776s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (153.45s)
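The probe above reads only the APIServer field. A broader status check is sketched below, assuming the same profile and the standard Host/Kubelet/APIServer fields of minikube status; exit status 2 still signals a degraded component, which the helper treats as "may be ok":

	# Report host, kubelet and apiserver state in a single call.
	out/minikube-linux-amd64 status -p functional-695625 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'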

x
+
TestFunctional/serial/MinikubeKubectlCmd (153.52s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 kubectl -- --context functional-695625 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 kubectl -- --context functional-695625 get pods: exit status 1 (1m0.122923684s)

** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-695625 kubectl -- --context functional-695625 get pods": exit status 1
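The wrapper returned a server-side Timeout after about a minute, which points at the apiserver rather than the kubectl shim. A sketch of bypassing the wrapper with a bounded request, assuming a kubectl binary on PATH and the functional-695625 context written by the test run:

	# Query the apiserver directly with an explicit client-side deadline so a
	# hung request fails fast instead of waiting out the server-side timeout.
	kubectl --context functional-695625 --request-timeout=30s get pods -A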
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.886594904s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
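With Host reporting Running while the earlier APIServer probe returned Stopped, the kubelet journal on the node is the next thing to read. A minimal sketch, assuming the systemd-managed kubelet unit visible in the log excerpts above:

	# Tail the kubelet unit journal from inside the node VM.
	out/minikube-linux-amd64 ssh -p functional-695625 -- sudo journalctl -u kubelet -n 50 --no-pager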
helpers_test.go:253: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m1.094427125s)
helpers_test.go:261: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                       ARGS                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-039815 --log_dir /tmp/nospam-039815 pause                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:52 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ delete  │ -p nospam-039815                                                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ start   │ -p functional-695625 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:53 UTC │
	│ start   │ -p functional-695625 --alsologtostderr -v=8                                       │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:53 UTC │                     │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:3.1                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:03 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:3.3                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:latest                          │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add minikube-local-cache-test:functional-695625           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache delete minikube-local-cache-test:functional-695625        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                  │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ list                                                                              │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl images                                          │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo docker rmi registry.k8s.io/pause:latest                │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │                     │
	│ cache   │ functional-695625 cache reload                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                  │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                               │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ kubectl │ functional-695625 kubectl -- --context functional-695625 get pods                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:53:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:53:22.250786   17440 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:53:22.251073   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:53:22.251082   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:53:22.251087   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:53:22.251322   17440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 06:53:22.251807   17440 out.go:368] Setting JSON to false
	I1229 06:53:22.252599   17440 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2152,"bootTime":1766989050,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:53:22.252669   17440 start.go:143] virtualization: kvm guest
	I1229 06:53:22.254996   17440 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:53:22.256543   17440 notify.go:221] Checking for updates...
	I1229 06:53:22.256551   17440 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:53:22.258115   17440 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:53:22.259464   17440 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:53:22.260823   17440 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 06:53:22.262461   17440 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 06:53:22.263830   17440 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:53:22.265499   17440 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:53:22.265604   17440 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:53:22.301877   17440 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 06:53:22.303062   17440 start.go:309] selected driver: kvm2
	I1229 06:53:22.303099   17440 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:22.303255   17440 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:53:22.304469   17440 cni.go:84] Creating CNI manager for ""
	I1229 06:53:22.304541   17440 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:53:22.304607   17440 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:22.304716   17440 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 06:53:22.306617   17440 out.go:179] * Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	I1229 06:53:22.307989   17440 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 06:53:22.308028   17440 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 06:53:22.308037   17440 cache.go:65] Caching tarball of preloaded images
	I1229 06:53:22.308172   17440 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 06:53:22.308185   17440 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 06:53:22.308288   17440 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/config.json ...
	I1229 06:53:22.308499   17440 start.go:360] acquireMachinesLock for functional-695625: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 06:53:22.308543   17440 start.go:364] duration metric: took 25.28µs to acquireMachinesLock for "functional-695625"
	I1229 06:53:22.308555   17440 start.go:96] Skipping create...Using existing machine configuration
	I1229 06:53:22.308560   17440 fix.go:54] fixHost starting: 
	I1229 06:53:22.310738   17440 fix.go:112] recreateIfNeeded on functional-695625: state=Running err=<nil>
	W1229 06:53:22.310765   17440 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 06:53:22.313927   17440 out.go:252] * Updating the running kvm2 "functional-695625" VM ...
	I1229 06:53:22.313960   17440 machine.go:94] provisionDockerMachine start ...
	I1229 06:53:22.317184   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.317690   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.317748   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.317941   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.318146   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.318156   17440 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 06:53:22.424049   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 06:53:22.424102   17440 buildroot.go:166] provisioning hostname "functional-695625"
	I1229 06:53:22.427148   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.427685   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.427715   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.427957   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.428261   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.428280   17440 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-695625 && echo "functional-695625" | sudo tee /etc/hostname
	I1229 06:53:22.552563   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 06:53:22.555422   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.555807   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.555834   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.556061   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.556278   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.556302   17440 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-695625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-695625/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-695625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 06:53:22.661438   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 06:53:22.661470   17440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 06:53:22.661505   17440 buildroot.go:174] setting up certificates
	I1229 06:53:22.661529   17440 provision.go:84] configureAuth start
	I1229 06:53:22.664985   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.665439   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.665459   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.667758   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.668124   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.668145   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.668257   17440 provision.go:143] copyHostCerts
	I1229 06:53:22.668280   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 06:53:22.668308   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 06:53:22.668317   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 06:53:22.668383   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 06:53:22.668476   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 06:53:22.668505   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 06:53:22.668512   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 06:53:22.668541   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 06:53:22.668582   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 06:53:22.668598   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 06:53:22.668603   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 06:53:22.668632   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 06:53:22.668676   17440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.functional-695625 san=[127.0.0.1 192.168.39.121 functional-695625 localhost minikube]
	I1229 06:53:22.746489   17440 provision.go:177] copyRemoteCerts
	I1229 06:53:22.746545   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 06:53:22.749128   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.749596   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.749616   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.749757   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:22.836885   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 06:53:22.836959   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 06:53:22.872390   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 06:53:22.872481   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 06:53:22.908829   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 06:53:22.908896   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 06:53:22.941014   17440 provision.go:87] duration metric: took 279.457536ms to configureAuth
	I1229 06:53:22.941053   17440 buildroot.go:189] setting minikube options for container-runtime
	I1229 06:53:22.941277   17440 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:53:22.944375   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.944857   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.944916   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.945128   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.945387   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.945402   17440 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 06:53:23.052106   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 06:53:23.052136   17440 buildroot.go:70] root file system type: tmpfs
	I1229 06:53:23.052304   17440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 06:53:23.055887   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.056416   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.056446   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.056629   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.056893   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.056961   17440 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 06:53:23.183096   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 06:53:23.186465   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.186943   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.187006   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.187227   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.187475   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.187494   17440 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 06:53:23.306011   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 06:53:23.306077   17440 machine.go:97] duration metric: took 992.109676ms to provisionDockerMachine
	I1229 06:53:23.306099   17440 start.go:293] postStartSetup for "functional-695625" (driver="kvm2")
	I1229 06:53:23.306114   17440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 06:53:23.306201   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 06:53:23.309537   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.309944   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.309967   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.310122   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.393657   17440 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 06:53:23.398689   17440 command_runner.go:130] > NAME=Buildroot
	I1229 06:53:23.398723   17440 command_runner.go:130] > VERSION=2025.02
	I1229 06:53:23.398731   17440 command_runner.go:130] > ID=buildroot
	I1229 06:53:23.398737   17440 command_runner.go:130] > VERSION_ID=2025.02
	I1229 06:53:23.398745   17440 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1229 06:53:23.398791   17440 info.go:137] Remote host: Buildroot 2025.02
	I1229 06:53:23.398821   17440 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 06:53:23.398897   17440 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 06:53:23.398981   17440 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 06:53:23.398993   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /etc/ssl/certs/134862.pem
	I1229 06:53:23.399068   17440 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> hosts in /etc/test/nested/copy/13486
	I1229 06:53:23.399075   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> /etc/test/nested/copy/13486/hosts
	I1229 06:53:23.399114   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13486
	I1229 06:53:23.412045   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 06:53:23.445238   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts --> /etc/test/nested/copy/13486/hosts (40 bytes)
	I1229 06:53:23.479048   17440 start.go:296] duration metric: took 172.930561ms for postStartSetup
	I1229 06:53:23.479099   17440 fix.go:56] duration metric: took 1.170538464s for fixHost
	I1229 06:53:23.482307   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.482761   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.482808   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.483049   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.483313   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.483327   17440 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 06:53:23.586553   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766991203.580410695
	
	I1229 06:53:23.586572   17440 fix.go:216] guest clock: 1766991203.580410695
	I1229 06:53:23.586579   17440 fix.go:229] Guest: 2025-12-29 06:53:23.580410695 +0000 UTC Remote: 2025-12-29 06:53:23.479103806 +0000 UTC m=+1.278853461 (delta=101.306889ms)
	I1229 06:53:23.586594   17440 fix.go:200] guest clock delta is within tolerance: 101.306889ms
	I1229 06:53:23.586598   17440 start.go:83] releasing machines lock for "functional-695625", held for 1.278049275s
	I1229 06:53:23.590004   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.590438   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.590463   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.591074   17440 ssh_runner.go:195] Run: cat /version.json
	I1229 06:53:23.591186   17440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 06:53:23.594362   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594454   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594831   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.594868   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594954   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.595021   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.595083   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.595278   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.692873   17440 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1229 06:53:23.692948   17440 command_runner.go:130] > {"iso_version": "v1.37.0-1766979747-22353", "kicbase_version": "v0.0.48-1766884053-22351", "minikube_version": "v1.37.0", "commit": "f5189b2bdbb6990e595e25e06a017f8901d29fa8"}
	I1229 06:53:23.693063   17440 ssh_runner.go:195] Run: systemctl --version
	I1229 06:53:23.700357   17440 command_runner.go:130] > systemd 256 (256.7)
	I1229 06:53:23.700393   17440 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1229 06:53:23.700501   17440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1229 06:53:23.707230   17440 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1229 06:53:23.707369   17440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 06:53:23.707433   17440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 06:53:23.719189   17440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 06:53:23.719220   17440 start.go:496] detecting cgroup driver to use...
	I1229 06:53:23.719246   17440 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 06:53:23.719351   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:53:23.744860   17440 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1229 06:53:23.744940   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 06:53:23.758548   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 06:53:23.773051   17440 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 06:53:23.773122   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 06:53:23.786753   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 06:53:23.800393   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 06:53:23.813395   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 06:53:23.826600   17440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 06:53:23.840992   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 06:53:23.854488   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 06:53:23.869084   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 06:53:23.882690   17440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 06:53:23.894430   17440 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1229 06:53:23.894542   17440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 06:53:23.912444   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:24.139583   17440 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 06:53:24.191402   17440 start.go:496] detecting cgroup driver to use...
	I1229 06:53:24.191457   17440 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 06:53:24.191521   17440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 06:53:24.217581   17440 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1229 06:53:24.217604   17440 command_runner.go:130] > [Unit]
	I1229 06:53:24.217609   17440 command_runner.go:130] > Description=Docker Application Container Engine
	I1229 06:53:24.217615   17440 command_runner.go:130] > Documentation=https://docs.docker.com
	I1229 06:53:24.217626   17440 command_runner.go:130] > After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1229 06:53:24.217631   17440 command_runner.go:130] > Wants=network-online.target containerd.service
	I1229 06:53:24.217635   17440 command_runner.go:130] > Requires=docker.socket
	I1229 06:53:24.217638   17440 command_runner.go:130] > StartLimitBurst=3
	I1229 06:53:24.217642   17440 command_runner.go:130] > StartLimitIntervalSec=60
	I1229 06:53:24.217646   17440 command_runner.go:130] > [Service]
	I1229 06:53:24.217649   17440 command_runner.go:130] > Type=notify
	I1229 06:53:24.217653   17440 command_runner.go:130] > Restart=always
	I1229 06:53:24.217660   17440 command_runner.go:130] > ExecStart=
	I1229 06:53:24.217694   17440 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1229 06:53:24.217710   17440 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1229 06:53:24.217748   17440 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1229 06:53:24.217761   17440 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1229 06:53:24.217767   17440 command_runner.go:130] > LimitNOFILE=infinity
	I1229 06:53:24.217782   17440 command_runner.go:130] > LimitNPROC=infinity
	I1229 06:53:24.217790   17440 command_runner.go:130] > LimitCORE=infinity
	I1229 06:53:24.217818   17440 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1229 06:53:24.217828   17440 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1229 06:53:24.217833   17440 command_runner.go:130] > TasksMax=infinity
	I1229 06:53:24.217840   17440 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1229 06:53:24.217847   17440 command_runner.go:130] > Delegate=yes
	I1229 06:53:24.217855   17440 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1229 06:53:24.217864   17440 command_runner.go:130] > KillMode=process
	I1229 06:53:24.217871   17440 command_runner.go:130] > OOMScoreAdjust=-500
	I1229 06:53:24.217881   17440 command_runner.go:130] > [Install]
	I1229 06:53:24.217896   17440 command_runner.go:130] > WantedBy=multi-user.target
	I1229 06:53:24.217973   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:53:24.255457   17440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 06:53:24.293449   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:53:24.313141   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 06:53:24.332090   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:53:24.359168   17440 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1229 06:53:24.359453   17440 ssh_runner.go:195] Run: which cri-dockerd
	I1229 06:53:24.364136   17440 command_runner.go:130] > /usr/bin/cri-dockerd
	I1229 06:53:24.364255   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 06:53:24.377342   17440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 06:53:24.400807   17440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 06:53:24.632265   17440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 06:53:24.860401   17440 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 06:53:24.860544   17440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 06:53:24.885002   17440 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 06:53:24.902479   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:25.138419   17440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 06:53:48.075078   17440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (22.936617903s)
	I1229 06:53:48.075181   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 06:53:48.109404   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 06:53:48.160259   17440 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 06:53:48.213352   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 06:53:48.231311   17440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 06:53:48.408709   17440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 06:53:48.584722   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:48.754219   17440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 06:53:48.798068   17440 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 06:53:48.815248   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:48.983637   17440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 06:53:49.117354   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 06:53:49.139900   17440 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 06:53:49.139985   17440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 06:53:49.146868   17440 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1229 06:53:49.146900   17440 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1229 06:53:49.146910   17440 command_runner.go:130] > Device: 0,23	Inode: 2092        Links: 1
	I1229 06:53:49.146918   17440 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1229 06:53:49.146926   17440 command_runner.go:130] > Access: 2025-12-29 06:53:49.121969518 +0000
	I1229 06:53:49.146933   17440 command_runner.go:130] > Modify: 2025-12-29 06:53:48.995956445 +0000
	I1229 06:53:49.146940   17440 command_runner.go:130] > Change: 2025-12-29 06:53:49.012958222 +0000
	I1229 06:53:49.146947   17440 command_runner.go:130] >  Birth: 2025-12-29 06:53:48.995956445 +0000
	I1229 06:53:49.146986   17440 start.go:574] Will wait 60s for crictl version
	I1229 06:53:49.147040   17440 ssh_runner.go:195] Run: which crictl
	I1229 06:53:49.152717   17440 command_runner.go:130] > /usr/bin/crictl
	I1229 06:53:49.152823   17440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 06:53:49.184154   17440 command_runner.go:130] > Version:  0.1.0
	I1229 06:53:49.184179   17440 command_runner.go:130] > RuntimeName:  docker
	I1229 06:53:49.184183   17440 command_runner.go:130] > RuntimeVersion:  28.5.2
	I1229 06:53:49.184188   17440 command_runner.go:130] > RuntimeApiVersion:  v1
	I1229 06:53:49.184211   17440 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
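
Note: the crictl version check above succeeds because the /etc/crictl.yaml written earlier points crictl at the cri-dockerd socket. A sketch of the same check done by hand, assuming the socket path used in this run:

    # what minikube wrote to /etc/crictl.yaml above
    runtime-endpoint: unix:///var/run/cri-dockerd.sock

    # equivalent explicit invocation that bypasses the config file
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
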
	I1229 06:53:49.184266   17440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 06:53:49.212414   17440 command_runner.go:130] > 28.5.2
	I1229 06:53:49.213969   17440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 06:53:49.257526   17440 command_runner.go:130] > 28.5.2
	I1229 06:53:49.262261   17440 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 06:53:49.266577   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:49.267255   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:49.267298   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:49.267633   17440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 06:53:49.286547   17440 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1229 06:53:49.286686   17440 kubeadm.go:884] updating cluster {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 06:53:49.286896   17440 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 06:53:49.286965   17440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 06:53:49.324994   17440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0
	I1229 06:53:49.325029   17440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 06:53:49.325037   17440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0
	I1229 06:53:49.325045   17440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0
	I1229 06:53:49.325052   17440 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1229 06:53:49.325060   17440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1229 06:53:49.325067   17440 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1229 06:53:49.325074   17440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 06:53:49.325113   17440 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 06:53:49.325127   17440 docker.go:624] Images already preloaded, skipping extraction
	I1229 06:53:49.325191   17440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 06:53:49.352256   17440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0
	I1229 06:53:49.352294   17440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0
	I1229 06:53:49.352301   17440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0
	I1229 06:53:49.352309   17440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 06:53:49.352315   17440 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1229 06:53:49.352323   17440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1229 06:53:49.352349   17440 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1229 06:53:49.352361   17440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 06:53:49.352398   17440 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 06:53:49.352412   17440 cache_images.go:86] Images are preloaded, skipping loading
	I1229 06:53:49.352427   17440 kubeadm.go:935] updating node { 192.168.39.121 8441 v1.35.0 docker true true} ...
	I1229 06:53:49.352542   17440 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-695625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 06:53:49.352611   17440 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 06:53:49.466471   17440 command_runner.go:130] > systemd
	I1229 06:53:49.469039   17440 cni.go:84] Creating CNI manager for ""
	I1229 06:53:49.469084   17440 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:53:49.469108   17440 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 06:53:49.469137   17440 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-695625 NodeName:functional-695625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 06:53:49.469275   17440 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-695625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 06:53:49.469338   17440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 06:53:49.495545   17440 command_runner.go:130] > kubeadm
	I1229 06:53:49.495573   17440 command_runner.go:130] > kubectl
	I1229 06:53:49.495580   17440 command_runner.go:130] > kubelet
	I1229 06:53:49.495602   17440 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 06:53:49.495647   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 06:53:49.521658   17440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1229 06:53:49.572562   17440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
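
Note: the 318-byte 10-kubeadm.conf drop-in staged here carries the kubelet ExecStart override logged a few lines above (kubeadm.go:947). The exact split of that text between the drop-in and /lib/systemd/system/kubelet.service is not shown in the log, so the layout below is an inference from that snippet, not a verbatim copy:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -- inferred layout
    [Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-695625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121

    [Install]

    # picked up by the `systemctl daemon-reload` / `systemctl start kubelet` calls that follow below
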
	I1229 06:53:49.658210   17440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
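
Note: /var/tmp/minikube/kubeadm.yaml.new is the rendered config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). A sketch of sanity-checking it on the node before the `kubeadm init phase` calls later in this log, assuming the `kubeadm config validate` subcommand available in recent kubeadm releases:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
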
	I1229 06:53:49.740756   17440 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I1229 06:53:49.746333   17440 command_runner.go:130] > 192.168.39.121	control-plane.minikube.internal
	I1229 06:53:49.746402   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:50.073543   17440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 06:53:50.148789   17440 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625 for IP: 192.168.39.121
	I1229 06:53:50.148837   17440 certs.go:195] generating shared ca certs ...
	I1229 06:53:50.148860   17440 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:53:50.149082   17440 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 06:53:50.149152   17440 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 06:53:50.149169   17440 certs.go:257] generating profile certs ...
	I1229 06:53:50.149320   17440 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key
	I1229 06:53:50.149413   17440 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key.a4651613
	I1229 06:53:50.149478   17440 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key
	I1229 06:53:50.149490   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 06:53:50.149508   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 06:53:50.149525   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 06:53:50.149541   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 06:53:50.149556   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 06:53:50.149573   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 06:53:50.149588   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 06:53:50.149607   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 06:53:50.149673   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 06:53:50.149723   17440 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 06:53:50.149738   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 06:53:50.149776   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 06:53:50.149837   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 06:53:50.149873   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 06:53:50.149950   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 06:53:50.150003   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:50.150023   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem -> /usr/share/ca-certificates/13486.pem
	I1229 06:53:50.150038   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /usr/share/ca-certificates/134862.pem
	I1229 06:53:50.150853   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 06:53:50.233999   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 06:53:50.308624   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 06:53:50.436538   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 06:53:50.523708   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 06:53:50.633239   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 06:53:50.746852   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 06:53:50.793885   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 06:53:50.894956   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 06:53:50.955149   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 06:53:51.018694   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 06:53:51.084938   17440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 06:53:51.127238   17440 ssh_runner.go:195] Run: openssl version
	I1229 06:53:51.136812   17440 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1229 06:53:51.136914   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.154297   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 06:53:51.175503   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182560   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182600   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182653   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.195355   17440 command_runner.go:130] > b5213941
	I1229 06:53:51.195435   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 06:53:51.217334   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.233542   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 06:53:51.248778   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255758   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255826   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255874   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.272983   17440 command_runner.go:130] > 51391683
	I1229 06:53:51.273077   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 06:53:51.303911   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.325828   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 06:53:51.347788   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360429   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360567   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360625   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.369235   17440 command_runner.go:130] > 3ec20f2e
	I1229 06:53:51.369334   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
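
Note: the `openssl x509 -hash` / `test -L /etc/ssl/certs/<hash>.0` pairs above implement OpenSSL's hashed trust-directory layout: a CA is trusted from /etc/ssl/certs when a symlink named after its subject hash (with a .0 suffix) points at its PEM file. A sketch of the same wiring done manually for the minikube CA, using the paths from this run:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this log
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    # illustrative check that the apiserver cert now chains to the linked CA
    sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt
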
	I1229 06:53:51.381517   17440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:53:51.387517   17440 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:53:51.387548   17440 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1229 06:53:51.387554   17440 command_runner.go:130] > Device: 253,1	Inode: 1052441     Links: 1
	I1229 06:53:51.387560   17440 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1229 06:53:51.387568   17440 command_runner.go:130] > Access: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387572   17440 command_runner.go:130] > Modify: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387577   17440 command_runner.go:130] > Change: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387581   17440 command_runner.go:130] >  Birth: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387657   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 06:53:51.396600   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.397131   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 06:53:51.410180   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.410283   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 06:53:51.419062   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.419164   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 06:53:51.431147   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.431222   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 06:53:51.441881   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.442104   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 06:53:51.450219   17440 command_runner.go:130] > Certificate will not expire
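
Note: each `openssl x509 ... -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 h); it prints "Certificate will not expire" and exits 0 only when the cert stays valid for the whole window, which is what lets minikube reuse the existing certs instead of regenerating them. A standalone sketch of the same test:

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver.crt is valid for at least another 24h"
    else
      echo "apiserver.crt expires within 24h (or could not be read) -- would trigger regeneration"
    fi
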
	I1229 06:53:51.450295   17440 kubeadm.go:401] StartCluster: {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35
.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:51.450396   17440 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 06:53:51.474716   17440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 06:53:51.489086   17440 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1229 06:53:51.489107   17440 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1229 06:53:51.489113   17440 command_runner.go:130] > /var/lib/minikube/etcd:
	I1229 06:53:51.489117   17440 command_runner.go:130] > member
	I1229 06:53:51.489676   17440 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 06:53:51.489694   17440 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 06:53:51.489753   17440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 06:53:51.503388   17440 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:51.503948   17440 kubeconfig.go:125] found "functional-695625" server: "https://192.168.39.121:8441"
	I1229 06:53:51.504341   17440 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:53:51.504505   17440 kapi.go:59] client config for functional-695625: &rest.Config{Host:"https://192.168.39.121:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 06:53:51.504963   17440 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 06:53:51.504986   17440 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 06:53:51.504992   17440 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 06:53:51.504998   17440 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 06:53:51.505004   17440 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 06:53:51.505012   17440 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 06:53:51.505089   17440 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1229 06:53:51.505414   17440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 06:53:51.521999   17440 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.121
	I1229 06:53:51.522047   17440 kubeadm.go:1161] stopping kube-system containers ...
	I1229 06:53:51.522115   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 06:53:51.550376   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:53:51.550407   17440 command_runner.go:130] > a014f32abcd0
	I1229 06:53:51.550415   17440 command_runner.go:130] > d81259f64136
	I1229 06:53:51.550422   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:53:51.550432   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:53:51.550441   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:53:51.550448   17440 command_runner.go:130] > 4ed279733477
	I1229 06:53:51.550455   17440 command_runner.go:130] > 1fc5fa7d9295
	I1229 06:53:51.550462   17440 command_runner.go:130] > 98261fa185f6
	I1229 06:53:51.550470   17440 command_runner.go:130] > b046056ff071
	I1229 06:53:51.550478   17440 command_runner.go:130] > b3cc8048f6d9
	I1229 06:53:51.550485   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:53:51.550491   17440 command_runner.go:130] > 64853b50a6c5
	I1229 06:53:51.550496   17440 command_runner.go:130] > bd7d900efd48
	I1229 06:53:51.550505   17440 command_runner.go:130] > 8911777281f4
	I1229 06:53:51.550511   17440 command_runner.go:130] > a123d63a8edb
	I1229 06:53:51.550516   17440 command_runner.go:130] > 548561c7ada8
	I1229 06:53:51.550521   17440 command_runner.go:130] > fd22eb0d6c14
	I1229 06:53:51.550528   17440 command_runner.go:130] > 14aafc386533
	I1229 06:53:51.550540   17440 command_runner.go:130] > abbe46bd960e
	I1229 06:53:51.550548   17440 command_runner.go:130] > 4b032678478a
	I1229 06:53:51.550556   17440 command_runner.go:130] > 0af491ef7c2f
	I1229 06:53:51.550566   17440 command_runner.go:130] > 5024b03252e3
	I1229 06:53:51.550572   17440 command_runner.go:130] > fe7b5da2f7fb
	I1229 06:53:51.550582   17440 command_runner.go:130] > ad82b94f7629
	I1229 06:53:51.552420   17440 docker.go:487] Stopping containers: [6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629]
	I1229 06:53:51.552499   17440 ssh_runner.go:195] Run: docker stop 6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629
	I1229 06:53:51.976888   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:53:51.976911   17440 command_runner.go:130] > a014f32abcd0
	I1229 06:53:58.789216   17440 command_runner.go:130] > d81259f64136
	I1229 06:53:58.789240   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:53:58.789248   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:53:58.789252   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:53:58.789256   17440 command_runner.go:130] > 4ed279733477
	I1229 06:53:58.789259   17440 command_runner.go:130] > 1fc5fa7d9295
	I1229 06:53:58.789262   17440 command_runner.go:130] > 98261fa185f6
	I1229 06:53:58.789266   17440 command_runner.go:130] > b046056ff071
	I1229 06:53:58.789269   17440 command_runner.go:130] > b3cc8048f6d9
	I1229 06:53:58.789272   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:53:58.789275   17440 command_runner.go:130] > 64853b50a6c5
	I1229 06:53:58.789278   17440 command_runner.go:130] > bd7d900efd48
	I1229 06:53:58.789281   17440 command_runner.go:130] > 8911777281f4
	I1229 06:53:58.789284   17440 command_runner.go:130] > a123d63a8edb
	I1229 06:53:58.789287   17440 command_runner.go:130] > 548561c7ada8
	I1229 06:53:58.789295   17440 command_runner.go:130] > fd22eb0d6c14
	I1229 06:53:58.789299   17440 command_runner.go:130] > 14aafc386533
	I1229 06:53:58.789303   17440 command_runner.go:130] > abbe46bd960e
	I1229 06:53:58.789306   17440 command_runner.go:130] > 4b032678478a
	I1229 06:53:58.789310   17440 command_runner.go:130] > 0af491ef7c2f
	I1229 06:53:58.789314   17440 command_runner.go:130] > 5024b03252e3
	I1229 06:53:58.789317   17440 command_runner.go:130] > fe7b5da2f7fb
	I1229 06:53:58.789321   17440 command_runner.go:130] > ad82b94f7629
	I1229 06:53:58.790986   17440 ssh_runner.go:235] Completed: docker stop 6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629: (7.238443049s)
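
Note: the name filter k8s_.*_(kube-system)_ works because cri-dockerd keeps the dockershim-style container naming k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so matching on the namespace field selects exactly the kube-system containers. A sketch of listing and stopping them by hand with the same filter (quoted for the shell because of the parentheses):

    IDS=$(docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}')
    echo "$IDS"        # the 25 container IDs listed above
    docker stop $IDS   # unquoted on purpose so each ID becomes its own argument; took ~7.2s in this run
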
	I1229 06:53:58.791057   17440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 06:53:58.833953   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:53:58.857522   17440 command_runner.go:130] > -rw------- 1 root root 5635 Dec 29 06:52 /etc/kubernetes/admin.conf
	I1229 06:53:58.857550   17440 command_runner.go:130] > -rw------- 1 root root 5638 Dec 29 06:52 /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.857561   17440 command_runner.go:130] > -rw------- 1 root root 1974 Dec 29 06:52 /etc/kubernetes/kubelet.conf
	I1229 06:53:58.857571   17440 command_runner.go:130] > -rw------- 1 root root 5590 Dec 29 06:52 /etc/kubernetes/scheduler.conf
	I1229 06:53:58.857610   17440 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 29 06:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Dec 29 06:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1974 Dec 29 06:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Dec 29 06:52 /etc/kubernetes/scheduler.conf
	
	I1229 06:53:58.857671   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:53:58.875294   17440 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I1229 06:53:58.876565   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:53:58.896533   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.896617   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:53:58.917540   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.936703   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.936777   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.957032   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:53:58.970678   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.970742   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:53:58.992773   17440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:53:59.007767   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.061402   17440 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 06:53:59.061485   17440 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1229 06:53:59.061525   17440 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1229 06:53:59.061923   17440 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 06:53:59.062217   17440 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1229 06:53:59.062329   17440 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1229 06:53:59.062606   17440 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1229 06:53:59.062852   17440 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1229 06:53:59.062948   17440 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1229 06:53:59.063179   17440 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 06:53:59.063370   17440 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 06:53:59.063615   17440 command_runner.go:130] > [certs] Using the existing "sa" key
	I1229 06:53:59.066703   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.686012   17440 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 06:53:59.686050   17440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1229 06:53:59.686059   17440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I1229 06:53:59.686069   17440 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 06:53:59.686078   17440 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 06:53:59.686087   17440 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 06:53:59.686203   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.995495   17440 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 06:53:59.995529   17440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 06:53:59.995539   17440 command_runner.go:130] > [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 06:53:59.995545   17440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 06:53:59.995549   17440 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1229 06:53:59.995615   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:54:00.047957   17440 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 06:54:00.047983   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 06:54:00.053966   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 06:54:00.056537   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 06:54:00.059558   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:54:00.175745   17440 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 06:54:00.175825   17440 api_server.go:52] waiting for apiserver process to appear ...
	I1229 06:54:00.175893   17440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 06:54:00.233895   17440 command_runner.go:130] > 2416
	I1229 06:54:00.233940   17440 api_server.go:72] duration metric: took 58.126409ms to wait for apiserver process to appear ...
	I1229 06:54:00.233953   17440 api_server.go:88] waiting for apiserver healthz status ...
	I1229 06:54:00.233976   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:05.236821   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:05.236865   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:10.239922   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:10.239956   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:15.242312   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:15.242347   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:20.245667   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:20.245726   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:25.248449   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:25.248501   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:30.249241   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:30.249279   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:35.251737   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:35.251771   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:40.254366   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:40.254407   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:45.257232   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:45.257275   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:50.259644   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:50.259685   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:55.261558   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:55.261592   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:55:00.263123   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
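
Note: the loop above retries https://192.168.39.121:8441/healthz roughly every five seconds and every attempt times out, so the apiserver never becomes healthy and minikube falls back to collecting container logs below. A sketch of probing the same endpoints by hand from the host:

    # -k skips TLS verification; /healthz and /readyz are readable anonymously on a default kubeadm apiserver
    curl -k --max-time 5 https://192.168.39.121:8441/healthz
    curl -k --max-time 5 'https://192.168.39.121:8441/readyz?verbose'
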
	I1229 06:55:00.263241   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:55:00.287429   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:55:00.288145   17440 logs.go:282] 1 containers: [fb6db97d8ffe]
	I1229 06:55:00.288289   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:55:00.310519   17440 command_runner.go:130] > d81259f64136
	I1229 06:55:00.310561   17440 logs.go:282] 1 containers: [d81259f64136]
	I1229 06:55:00.310630   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:55:00.334579   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:55:00.334624   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:55:00.334692   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:55:00.353472   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:55:00.353503   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:55:00.354626   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:55:00.354714   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:55:00.376699   17440 command_runner.go:130] > 8911777281f4
	I1229 06:55:00.378105   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:55:00.378188   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:55:00.397976   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:55:00.399617   17440 logs.go:282] 1 containers: [17fe16a2822a]
	I1229 06:55:00.399707   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:55:00.419591   17440 logs.go:282] 0 containers: []
	W1229 06:55:00.419617   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:55:00.419665   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:55:00.440784   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:55:00.441985   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:55:00.442020   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:55:00.442030   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:55:00.465151   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.465192   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:55:00.465226   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.465237   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:55:00.465255   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.465271   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:55:00.465285   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:55:00.465823   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:55:00.465845   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:55:00.487618   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:55:00.487646   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:55:00.508432   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.508468   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:55:00.508482   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:55:00.508508   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:55:00.508521   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:55:00.508529   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.508541   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:55:00.508551   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:55:00.508560   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:55:00.508568   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:55:00.510308   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:55:00.510337   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:55:00.531862   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.532900   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:55:00.532924   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:55:00.554051   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:55:00.554084   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:55:00.554095   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:55:00.554109   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:55:00.554131   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:55:00.554148   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:55:00.554170   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:55:00.554189   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:55:00.554195   17440 command_runner.go:130] !  >
	I1229 06:55:00.554208   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:55:00.554224   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:55:00.554250   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:55:00.554261   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:55:00.554273   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.554316   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:55:00.554327   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:55:00.554339   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:55:00.554350   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:55:00.554366   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:55:00.554381   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:55:00.554390   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:55:00.554402   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:55:00.554414   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:55:00.554427   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:55:00.554437   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:55:00.554452   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:55:00.556555   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:55:00.556578   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:55:00.581812   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:55:00.581848   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:55:00.581857   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:55:00.581865   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581874   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581881   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:55:00.581890   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:55:00.581911   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:55:00.581919   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581930   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581942   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:55:00.581949   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581957   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581964   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581975   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581985   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581993   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582003   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582010   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582020   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582030   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582037   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582044   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582051   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582070   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582080   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582088   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582097   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582105   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582115   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582125   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582141   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582152   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582160   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582170   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582177   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582186   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582193   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582203   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582211   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582221   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582228   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582235   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582242   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582252   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582261   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582269   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582276   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582287   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582294   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582302   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582312   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582319   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582329   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582336   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582346   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582353   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582363   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582370   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582378   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582385   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.586872   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:55:00.586916   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:55:00.609702   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.609731   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.609766   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.609784   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.609811   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.609822   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:55:00.609831   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:55:00.609842   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.609848   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.609857   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:55:00.609865   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.609879   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.609890   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.609906   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.609915   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.609923   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:55:00.609943   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.609954   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:55:00.609966   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.609976   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.609983   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:55:00.609990   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.609998   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610006   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610016   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610024   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610041   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610050   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610070   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610082   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.610091   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.610100   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.610107   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:55:00.610115   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.610123   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.610131   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.610141   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.610152   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.610159   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.610168   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610179   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:55:00.610191   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:55:00.610203   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.610216   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.610223   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.610231   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.610242   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:55:00.610251   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.610258   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.610265   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.610271   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.610281   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:55:00.610290   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.610303   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.610323   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.610335   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.610345   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.610355   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:55:00.610374   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.610384   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:55:00.610394   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.610404   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.610412   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610422   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610429   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610439   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610447   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610455   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610461   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610470   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:55:00.610476   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610483   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610491   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.610500   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.610508   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.610516   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:55:00.610523   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.610531   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.610538   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.610550   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.610559   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.610567   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.610573   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610579   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.610595   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.610607   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:55:00.610615   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.610622   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.610630   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.610637   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.610644   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:55:00.610653   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.610669   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.610680   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.610692   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.610705   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.610713   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:55:00.610735   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.610744   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:55:00.610755   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.610765   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.610772   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610781   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610789   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610809   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610818   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610824   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610853   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610867   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610881   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610896   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610909   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:55:00.610922   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610936   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610949   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610964   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610979   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.610995   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611010   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.611021   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.611037   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:55:00.611048   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:55:00.611062   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.611070   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.611079   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:55:00.611087   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.611096   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.611102   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.611109   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:55:00.611118   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.611125   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:55:00.611135   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.611146   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.611157   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.611167   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.611179   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.611186   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:55:00.611199   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611213   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611226   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611241   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611266   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611281   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611295   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611310   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611325   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611342   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611355   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611370   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611382   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611404   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.611417   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:55:00.611435   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:55:00.611449   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:55:00.611464   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:55:00.611476   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:55:00.611491   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:55:00.611502   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:55:00.611517   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:55:00.611529   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:55:00.611544   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:55:00.611558   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:55:00.611574   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:55:00.611586   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:55:00.611601   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:55:00.611617   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:55:00.611631   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:55:00.611645   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:55:00.611660   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:55:00.611674   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:55:00.611689   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:55:00.611702   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:55:00.611712   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:55:00.611722   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.611732   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.611740   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.611751   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.611759   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.611767   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.611835   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.611849   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.611867   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:55:00.611877   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.611888   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:55:00.611894   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.611901   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:55:00.611909   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.611917   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.611929   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.611937   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.611946   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:55:00.611954   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.611963   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.611971   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.611981   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.611990   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.611999   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.612006   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.612019   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612031   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612046   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612063   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612079   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612093   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612112   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:55:00.612128   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612142   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612157   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612171   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612185   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612201   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612217   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612230   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612245   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612259   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612274   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612293   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:55:00.612309   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612323   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:55:00.612338   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:55:00.612354   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612366   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612380   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:55:00.612394   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.612407   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
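	The dockerd and cri-dockerd entries above are the Docker engine journal as gathered over SSH by the log collector. To pull the same journal straight from the node while reproducing this run, a command along these lines should work (a sketch only: the profile name functional-695625 is this run's profile, and the docker and cri-docker unit names are taken from the systemd messages above):

	    # dump the most recent docker/cri-docker journal lines from the functional-695625 node (sketch)
	    minikube ssh -p functional-695625 -- "sudo journalctl -u docker -u cri-docker --no-pager -n 200"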
	I1229 06:55:00.629261   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:55:00.629293   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:55:00.671242   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:55:00.671279   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       About a minute ago   Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:55:00.671293   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:55:00.671303   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       About a minute ago   Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:55:00.671315   17440 command_runner.go:130] > fb6db97d8ffe4       5c6acd67e9cd1       About a minute ago   Exited              kube-apiserver            1                   4ed2797334771       kube-apiserver-functional-695625            kube-system
	I1229 06:55:00.671327   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       About a minute ago   Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:55:00.671337   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       About a minute ago   Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:55:00.671347   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:55:00.671362   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       2 minutes ago        Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
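	The container listing above is the output of the fallback command shown on the preceding "Run:" line (crictl when available, otherwise docker). Executed by hand against the same node it would look roughly like the sketch below, assuming the same functional-695625 profile:

	    # list all CRI containers on the functional-695625 node, falling back to docker (sketch)
	    minikube ssh -p functional-695625 -- 'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'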
	I1229 06:55:00.673604   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:55:00.673628   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:55:00.695836   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077121    2634 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:55:00.695863   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077418    2634 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:55:00.695877   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077955    2634 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:55:00.695887   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.109084    2634 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:55:00.695901   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.135073    2634 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:55:00.695910   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.137245    2634 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:55:00.695920   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.137294    2634 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:55:00.695934   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.137340    2634 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:55:00.695942   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.209773    2634 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:55:00.695952   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.209976    2634 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:55:00.695962   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210050    2634 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:55:00.695975   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210361    2634 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:55:00.696001   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210374    2634 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:55:00.696011   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210392    2634 policy_none.go:50] "Start"
	I1229 06:55:00.696020   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210408    2634 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:55:00.696029   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210421    2634 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:55:00.696038   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210527    2634 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:55:00.696045   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210534    2634 policy_none.go:44] "Start"
	I1229 06:55:00.696056   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.219245    2634 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:55:00.696067   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.220437    2634 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:55:00.696078   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.220456    2634 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:55:00.696089   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.221071    2634 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:55:00.696114   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.226221    2634 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:55:00.696126   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.239387    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696144   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.239974    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696155   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.240381    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696165   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.262510    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696185   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283041    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696208   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283087    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696228   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283118    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696247   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283135    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696268   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283151    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696288   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283163    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696309   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283175    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696329   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283189    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696357   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283209    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696378   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283223    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696400   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283249    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696416   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.285713    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-functional-695625\" already exists" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696428   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.290012    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-functional-695625\" already exists" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696442   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.290269    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-functional-695625\" already exists" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696454   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.304300    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-functional-695625\" already exists" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696466   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.336817    2634 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.696475   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.351321    2634 kubelet_node_status.go:123] "Node was previously registered" node="functional-695625"
	I1229 06:55:00.696486   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.351415    2634 kubelet_node_status.go:77] "Successfully registered node" node="functional-695625"
	I1229 06:55:00.696493   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.033797    2634 apiserver.go:52] "Watching apiserver"
	I1229 06:55:00.696503   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.077546    2634 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1229 06:55:00.696527   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.181689    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-functional-695625" podStartSLOduration=3.181660018 podStartE2EDuration="3.181660018s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.180947341 +0000 UTC m=+1.223544146" watchObservedRunningTime="2025-12-29 06:52:42.181660018 +0000 UTC m=+1.224256834"
	I1229 06:55:00.696555   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.221952    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-functional-695625" podStartSLOduration=3.221936027 podStartE2EDuration="3.221936027s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.202120755 +0000 UTC m=+1.244717560" watchObservedRunningTime="2025-12-29 06:52:42.221936027 +0000 UTC m=+1.264532905"
	I1229 06:55:00.696583   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.238774    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-695625" podStartSLOduration=3.238759924 podStartE2EDuration="3.238759924s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.238698819 +0000 UTC m=+1.281295638" watchObservedRunningTime="2025-12-29 06:52:42.238759924 +0000 UTC m=+1.281356744"
	I1229 06:55:00.696609   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.238905    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-functional-695625" podStartSLOduration=3.238868136 podStartE2EDuration="3.238868136s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.224445467 +0000 UTC m=+1.267042290" watchObservedRunningTime="2025-12-29 06:52:42.238868136 +0000 UTC m=+1.281464962"
	I1229 06:55:00.696622   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266475    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696634   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266615    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696651   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266971    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696664   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.267487    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696678   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.287234    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-functional-695625\" already exists" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696690   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.287316    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696704   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.292837    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-functional-695625\" already exists" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696718   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.293863    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.696730   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.293764    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-functional-695625\" already exists" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696745   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.294163    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.696757   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.298557    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-functional-695625\" already exists" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696770   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.298633    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696782   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.272537    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.696807   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273148    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696835   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273501    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.696850   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273627    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696863   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: E1229 06:52:44.279056    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696877   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: E1229 06:52:44.279353    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696887   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: I1229 06:52:44.754123    2634 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1229 06:55:00.696899   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: I1229 06:52:44.756083    2634 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1229 06:55:00.696917   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.407560    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94mg5\" (UniqueName: \"kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696938   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.408503    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-proxy\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696958   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.408957    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-xtables-lock\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696976   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.409131    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-lib-modules\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696991   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528153    2634 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697004   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528186    2634 projected.go:196] Error preparing data for projected volume kube-api-access-94mg5 for pod kube-system/kube-proxy-g7lp9: configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697032   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528293    2634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5 podName:9c2c2ac1-7fa0-427d-b78e-ee14e169895a nodeName:}" failed. No retries permitted until 2025-12-29 06:52:46.028266861 +0000 UTC m=+5.070863673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-94mg5" (UniqueName: "kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5") pod "kube-proxy-g7lp9" (UID: "9c2c2ac1-7fa0-427d-b78e-ee14e169895a") : configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697044   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.406131    2634 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	I1229 06:55:00.697064   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519501    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64sn\" (UniqueName: \"kubernetes.io/projected/00a95e37-1394-45a7-a376-b195e31e3e9c-kube-api-access-b64sn\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:55:00.697084   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519550    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a95e37-1394-45a7-a376-b195e31e3e9c-config-volume\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:55:00.697104   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519571    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:55:00.697124   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519587    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:55:00.697138   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.411642    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605"
	I1229 06:55:00.697151   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.545186    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.697170   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731196    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f201ca-6d54-4e15-9584-396fb1486f3c-tmp\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:55:00.697192   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731252    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc5d\" (UniqueName: \"kubernetes.io/projected/b5f201ca-6d54-4e15-9584-396fb1486f3c-kube-api-access-ghc5d\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:55:00.697206   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.628275    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697229   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.634714    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9mrnn" podStartSLOduration=2.634698273 podStartE2EDuration="2.634698273s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.631484207 +0000 UTC m=+7.674081027" watchObservedRunningTime="2025-12-29 06:52:48.634698273 +0000 UTC m=+7.677295093"
	I1229 06:55:00.697245   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.649761    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.697268   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.694857    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfq7m" podStartSLOduration=2.694842541 podStartE2EDuration="2.694842541s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.672691157 +0000 UTC m=+7.715287974" watchObservedRunningTime="2025-12-29 06:52:48.694842541 +0000 UTC m=+7.737439360"
	I1229 06:55:00.697296   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.728097    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.728082592 podStartE2EDuration="1.728082592s" podCreationTimestamp="2025-12-29 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.696376688 +0000 UTC m=+7.738973499" watchObservedRunningTime="2025-12-29 06:52:48.728082592 +0000 UTC m=+7.770679413"
	I1229 06:55:00.697310   17440 command_runner.go:130] > Dec 29 06:52:49 functional-695625 kubelet[2634]: E1229 06:52:49.674249    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697322   17440 command_runner.go:130] > Dec 29 06:52:50 functional-695625 kubelet[2634]: E1229 06:52:50.680852    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697336   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.223368    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.697361   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: I1229 06:52:52.243928    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g7lp9" podStartSLOduration=7.243911092 podStartE2EDuration="7.243911092s" podCreationTimestamp="2025-12-29 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.744380777 +0000 UTC m=+7.786977597" watchObservedRunningTime="2025-12-29 06:52:52.243911092 +0000 UTC m=+11.286507895"
	I1229 06:55:00.697376   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.396096    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.697388   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.693687    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.697402   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: E1229 06:52:53.390926    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.697420   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979173    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:55:00.697442   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979225    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:55:00.697463   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979732    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	I1229 06:55:00.697483   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.981248    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "kube-api-access-lc5xj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	I1229 06:55:00.697499   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079447    2634 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:55:00.697515   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079521    2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:55:00.697526   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.715729    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697536   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.756456    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697554   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: E1229 06:52:54.758451    2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697576   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.758508    2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"} err="failed to get container status \"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697591   17440 command_runner.go:130] > Dec 29 06:52:55 functional-695625 kubelet[2634]: I1229 06:52:55.144582    2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4313c5f-3b86-48de-8f3c-02d7e007542a" path="/var/lib/kubelet/pods/c4313c5f-3b86-48de-8f3c-02d7e007542a/volumes"
	I1229 06:55:00.697608   17440 command_runner.go:130] > Dec 29 06:52:58 functional-695625 kubelet[2634]: E1229 06:52:58.655985    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.697621   17440 command_runner.go:130] > Dec 29 06:53:20 functional-695625 kubelet[2634]: E1229 06:53:20.683378    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697637   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913108    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697651   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913180    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697669   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913193    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697710   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915141    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697726   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915181    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697746   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915192    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697762   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139490    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.697775   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139600    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697790   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139623    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697815   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139634    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697830   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917175    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697846   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917271    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697860   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917284    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697876   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918722    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697892   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918780    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697906   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918792    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697923   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139097    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.697937   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139170    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697951   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139187    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697966   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139214    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697986   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921730    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698002   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921808    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698029   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921823    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698046   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.923664    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698060   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924161    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698081   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924185    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698097   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139396    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698113   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139458    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698126   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139472    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698141   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139485    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698155   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698172   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698187   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:55:00.698202   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698218   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:55:00.698235   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698274   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698293   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698309   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698325   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698341   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698362   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698378   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698395   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698408   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698424   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698439   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698455   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698469   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698484   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698501   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698514   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698527   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698541   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698554   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698577   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698590   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698606   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698620   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698634   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698650   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698666   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698682   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698696   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698711   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698727   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698743   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698756   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698769   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698784   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698808   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698823   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698840   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698853   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698868   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698886   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698903   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698916   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698933   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698948   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698962   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698976   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698993   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:55:00.699007   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:55:00.699018   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699031   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699042   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.699055   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.699067   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:55:00.699078   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.699093   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699105   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699119   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699130   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:55:00.699145   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.699157   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.699180   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:55:00.699195   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.699207   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:55:00.699224   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:55:00.699243   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:55:00.699256   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:55:00.699269   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.699284   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.699310   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.699330   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.699343   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:55:00.699362   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:55:00.699380   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.699407   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:55:00.699439   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:55:00.699460   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:55:00.699477   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.699497   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.699515   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:55:00.699533   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.699619   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.699640   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.699660   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699683   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699709   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699722   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:55:00.699738   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699750   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:55:00.699763   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699774   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:55:00.699785   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699807   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:55:00.699820   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.699834   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699846   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699861   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699872   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.699886   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.699931   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699946   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699956   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:55:00.699972   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700008   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:55:00.700031   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700053   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700067   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:55:00.700078   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:55:00.700091   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:55:00.700102   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700116   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:55:00.700129   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.700139   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:55:00.700159   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700168   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:55:00.700179   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:55:00.700190   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700199   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700217   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700228   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700240   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:55:00.700250   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:55:00.700268   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700281   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.700291   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:55:00.700310   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700321   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700331   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:55:00.700349   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700364   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700375   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700394   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700405   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700415   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700427   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700454   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:55:00.700474   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700515   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:55:00.700529   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700539   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700558   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700570   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.700578   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:55:00.700584   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:55:00.700590   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:55:00.700597   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:55:00.700603   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:55:00.700612   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:55:00.700620   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.700631   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:55:00.700641   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:55:00.700652   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:55:00.700662   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:55:00.700674   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.700684   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:55:00.700696   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:55:00.700707   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:55:00.700717   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:55:00.700758   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:55:00.700770   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:55:00.700779   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:55:00.700790   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:55:00.700816   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:55:00.700831   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:55:00.700846   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:55:00.700858   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:55:00.700866   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:55:00.700879   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:55:00.700891   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:55:00.700905   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:55:00.700912   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:55:00.700921   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:55:00.700932   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.700943   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:55:00.700951   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:55:00.700963   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:55:00.700971   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:55:00.700986   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:55:00.701000   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:55:00.701008   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:55:00.701020   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:55:00.701029   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:55:00.701037   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:55:00.701046   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:55:00.701061   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:55:00.701073   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:55:00.701082   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:55:00.701093   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:55:00.701100   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:55:00.701114   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:55:00.701124   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:55:00.701143   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.701160   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.701170   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:55:00.701178   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:55:00.701188   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:55:00.701201   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:55:00.701210   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:55:00.701218   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:55:00.701226   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:55:00.701237   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:55:00.701246   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:55:00.701256   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:55:00.701266   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:55:00.701277   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:55:00.701287   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:55:00.701297   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:55:00.701308   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:55:00.701322   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701334   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701348   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701361   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:55:00.701372   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:55:00.701385   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:55:00.701399   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:55:00.701410   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:55:00.701422   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:55:00.701433   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701447   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.701458   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:55:00.701471   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:55:00.701483   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.701496   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:55:00.701508   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:55:00.701521   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:55:00.701533   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701550   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.701567   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:55:00.701581   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701592   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701611   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:55:00.701625   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701642   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:55:00.701678   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:55:00.701695   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:55:00.701705   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.701716   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701735   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:55:00.701749   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701764   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:55:00.701780   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:55:00.701807   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701827   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701847   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701867   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701886   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.701907   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701928   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701948   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701971   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701995   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.702014   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.702027   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.755255   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:55:00.755293   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:55:00.771031   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:55:00.771066   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:55:00.771079   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:55:00.771088   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:55:00.771097   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:55:00.771103   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:55:00.771109   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:55:00.771116   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:55:00.771121   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:55:00.771126   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:55:00.771131   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:55:00.771136   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:55:00.771143   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:55:00.771153   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:55:00.771158   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:55:00.771165   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:55:00.771175   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:55:00.771185   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:55:00.771191   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:55:00.771196   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:55:00.771202   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:55:00.772218   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:55:00.772246   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:56:00.863293   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:56:00.863340   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.091082059s)
	W1229 06:56:00.863385   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
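	The 1m0s wait above ends in a server-side Timeout from kubectl, which is consistent with the apiserver on this profile never becoming ready. As a purely illustrative sketch (not something the test runs), a short client-side probe of the /healthz endpoint would confirm that much faster; it assumes the profile's apiserver address 192.168.39.121:8441 taken from the logs above:

	// Hypothetical probe, not part of the minikube test suite: check whether
	// anything answers on the apiserver address with a short client timeout
	// instead of waiting out the 60s server-side timeout seen above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed certificate in this setup,
				// so skip verification for a quick reachability check only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.121:8441/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		// Without credentials this may be 401/403, but any HTTP response
		// at all proves the listener is up, unlike the timeouts above.
		fmt.Println("healthz status:", resp.Status)
	}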
	I1229 06:56:00.863402   17440 logs.go:123] Gathering logs for kube-apiserver [fb6db97d8ffe] ...
	I1229 06:56:00.863420   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6db97d8ffe"
	I1229 06:56:00.897112   17440 command_runner.go:130] ! I1229 06:53:50.588377       1 options.go:263] external host was not specified, using 192.168.39.121
	I1229 06:56:00.897142   17440 command_runner.go:130] ! I1229 06:53:50.597275       1 server.go:150] Version: v1.35.0
	I1229 06:56:00.897153   17440 command_runner.go:130] ! I1229 06:53:50.597323       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:00.897164   17440 command_runner.go:130] ! E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	W1229 06:56:00.898716   17440 logs.go:138] Found kube-apiserver [fb6db97d8ffe] problem: E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
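	The warning above is the key signal in this log bundle: the restarted kube-apiserver exits immediately because something is still bound to 0.0.0.0:8441. As a minimal, hypothetical sketch (not part of the test suite), the same failure mode can be reproduced in Go by opening the port twice, which is effectively what happens when a fresh apiserver races a not-yet-terminated instance for the listener:

	// Minimal sketch of the "bind: address already in use" condition flagged
	// above. The port 8441 matches the apiserver port used by this profile.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// First listener takes the port, as the still-running apiserver would.
		first, err := net.Listen("tcp", "0.0.0.0:8441")
		if err != nil {
			fmt.Println("first listen failed:", err)
			return
		}
		defer first.Close()

		// Second listener fails the same way as the restarted apiserver:
		// "listen tcp 0.0.0.0:8441: bind: address already in use".
		if _, err := net.Listen("tcp", "0.0.0.0:8441"); err != nil {
			fmt.Println("second listen failed:", err)
		}
	}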
	I1229 06:56:00.898738   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:56:00.898750   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:56:00.935530   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:00.938590   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:00.938653   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:00.938666   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:00.938679   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:00.938689   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:00.938712   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:00.938728   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:00.938838   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:00.938875   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:56:00.938892   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:00.938902   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:56:00.938913   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:56:00.938922   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:00.938935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:00.938946   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:00.938958   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:56:00.938969   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:00.938978   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:00.938993   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:00.939003   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:00.939022   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:00.939035   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:00.939046   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:00.939053   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:00.939062   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:00.939071   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:56:00.939081   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:56:00.939091   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:00.939111   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:00.939126   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:00.939142   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:00.939162   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:00.939181   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:00.939213   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:00.939249   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:00.939258   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:00.939274   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:00.939289   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:00.939302   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:00.939324   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:00.939342   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.939352   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:00.939362   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:00.939377   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:00.939389   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:00.939404   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:56:00.939423   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:56:00.939439   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:56:00.939458   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:00.939467   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:56:00.939478   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:00.939494   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:00.939513   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:56:00.939528   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:56:00.939544   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:00.939564   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:00.939586   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:00.939603   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:00.939616   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:00.939882   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:00.939915   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:00.939932   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:00.939947   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:00.939960   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:56:00.939998   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:00.940030   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:00.940064   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:00.940122   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940150   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:56:00.940167   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:56:00.940187   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:00.940204   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:00.940257   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940277   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:56:00.940301   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:00.940334   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:00.940371   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940389   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.940425   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940447   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.940473   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:00.955065   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:56:00.955108   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 06:56:00.955188   17440 out.go:285] X Problems detected in kube-apiserver [fb6db97d8ffe]:
	W1229 06:56:00.955202   17440 out.go:285]   E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:56:00.955209   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:56:00.955215   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
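
The warning block above pins down the soft-start failure: something was already listening on 0.0.0.0:8441 when the new kube-apiserver container (fb6db97d8ffe) started, so it exited with "bind: address already in use". A quick way to see what is holding the port from inside the VM would be a socket listing (a hypothetical manual check, not something the test itself runs):

	# hypothetical manual check inside the minikube VM; not part of the test flow
	sudo ss -ltnp | grep ':8441'
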
	I1229 06:56:10.957344   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:56:15.961183   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
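
The two lines above show the apiserver readiness loop: minikube polls https://192.168.39.121:8441/healthz and treats a client-side timeout as "stopped". The same endpoint can be probed by hand (a sketch, assuming curl is available on the host or in the VM; -k skips certificate verification, which is acceptable for a pure reachability check):

	# hypothetical manual probe of the same endpoint; not part of the test flow
	curl -k --max-time 5 https://192.168.39.121:8441/healthz
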
	I1229 06:56:15.961319   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:56:15.981705   17440 command_runner.go:130] > 18d0015c724a
	I1229 06:56:15.982641   17440 logs.go:282] 1 containers: [18d0015c724a]
	I1229 06:56:15.982732   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:56:16.002259   17440 command_runner.go:130] > 6b7711ee25a2
	I1229 06:56:16.002292   17440 command_runner.go:130] > d81259f64136
	I1229 06:56:16.002322   17440 logs.go:282] 2 containers: [6b7711ee25a2 d81259f64136]
	I1229 06:56:16.002399   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:56:16.021992   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:56:16.022032   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:56:16.022113   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:56:16.048104   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:56:16.048133   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:56:16.049355   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:56:16.049441   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:56:16.071523   17440 command_runner.go:130] > 8911777281f4
	I1229 06:56:16.072578   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:56:16.072668   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:56:16.092921   17440 command_runner.go:130] > f48fc04e3475
	I1229 06:56:16.092948   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:56:16.092975   17440 logs.go:282] 2 containers: [f48fc04e3475 17fe16a2822a]
	I1229 06:56:16.093047   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:56:16.113949   17440 logs.go:282] 0 containers: []
	W1229 06:56:16.113983   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:56:16.114047   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:56:16.135700   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:56:16.135739   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:56:16.135766   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:56:16.135786   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:56:16.152008   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:56:16.152038   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:56:16.152046   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:56:16.152054   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:56:16.152063   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:56:16.152069   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:56:16.152076   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:56:16.152081   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:56:16.152086   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:56:16.152091   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:56:16.152096   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:56:16.152102   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:56:16.152107   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:56:16.152112   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:56:16.152119   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:56:16.152128   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:56:16.152148   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:56:16.152164   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:56:16.152180   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:56:16.152190   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:56:16.152201   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:56:16.152209   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:56:16.152217   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:56:16.153163   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:56:16.153192   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:56:16.174824   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:56:16.174856   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:56:16.174862   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:56:16.174873   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:56:16.174892   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:56:16.174900   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:56:16.174913   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:56:16.174920   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:56:16.174924   17440 command_runner.go:130] !  >
	I1229 06:56:16.174931   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:56:16.174941   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:56:16.174957   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:56:16.174966   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:56:16.174975   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.174985   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:56:16.174994   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:56:16.175003   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:56:16.175012   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:56:16.175024   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:56:16.175033   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:56:16.175040   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:56:16.175050   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:56:16.175074   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:56:16.175325   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:56:16.175351   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:56:16.175362   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:56:16.177120   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:56:16.177144   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:56:16.222627   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:56:16.222665   17440 command_runner.go:130] > 18d0015c724a8       5c6acd67e9cd1       5 seconds ago       Exited              kube-apiserver            3                   d3819cc8ab802       kube-apiserver-functional-695625            kube-system
	I1229 06:56:16.222684   17440 command_runner.go:130] > f48fc04e34751       2c9a4b058bd7e       16 seconds ago      Running             kube-controller-manager   2                   0a96e34d38f8c       kube-controller-manager-functional-695625   kube-system
	I1229 06:56:16.222707   17440 command_runner.go:130] > 6b7711ee25a2d       0a108f7189562       16 seconds ago      Running             etcd                      2                   173054afc2f39       etcd-functional-695625                      kube-system
	I1229 06:56:16.222730   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       2 minutes ago       Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:56:16.222749   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       2 minutes ago       Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:56:16.222768   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       2 minutes ago       Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:56:16.222810   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       2 minutes ago       Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:56:16.222831   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       2 minutes ago       Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:56:16.222851   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:56:16.222879   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       3 minutes ago       Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:56:16.225409   17440 logs.go:123] Gathering logs for etcd [6b7711ee25a2] ...
	I1229 06:56:16.225439   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b7711ee25a2"
	I1229 06:56:16.247416   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.924768Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.247449   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925193Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:16.247516   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925252Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:16.247533   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925487Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:16.247545   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925602Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.247555   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925710Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:16.247582   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925810Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.247605   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.934471Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:16.247698   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.935217Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:16.247722   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.937503Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000068080}"}
	I1229 06:56:16.247733   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940423Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:16.247745   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940850Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.479356ms"}
	I1229 06:56:16.247753   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.941120Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":499}
	I1229 06:56:16.247762   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945006Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:16.247774   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945707Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:16.247782   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945966Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:16.247807   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.951906Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":499}
	I1229 06:56:16.247816   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952063Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:16.247825   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952160Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:16.247840   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952338Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:16.247851   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952385Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:16.247867   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952396Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:16.247878   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952406Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:16.247886   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952416Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:16.247893   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952460Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:16.247902   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:16.247914   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 3"}
	I1229 06:56:16.247924   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 3, commit: 499, applied: 0, lastindex: 499, lastterm: 3]"}
	I1229 06:56:16.247935   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.955095Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:16.247952   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.961356Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:16.247965   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.967658Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:16.247975   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.968487Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:16.247988   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969020Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.248000   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969260Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:16.248016   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969708Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:16.248035   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970043Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.248063   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970828Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:16.248074   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971046Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:16.248083   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970057Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.248092   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971258Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:16.248103   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970152Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:16.248113   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971336Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:16.248126   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971370Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:16.248136   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970393Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:16.248153   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972410Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:16.248166   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972698Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:16.248177   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 3"}
	I1229 06:56:16.248186   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 3"}
	I1229 06:56:16.248198   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.248208   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.248219   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 4"}
	I1229 06:56:16.248228   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 4"}
	I1229 06:56:16.248240   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.248248   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 4"}
	I1229 06:56:16.248260   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.356018Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 4"}
	I1229 06:56:16.248275   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358237Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:16.248287   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358323Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.248295   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358268Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.248304   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:16.248312   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:16.248320   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360417Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.248331   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.248341   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:16.248352   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363760Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
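
At this point the restarted etcd (container 6b7711ee25a2) has won the election at term 4 and is again serving client traffic on 127.0.0.1:2379 and 192.168.39.121:2379, so the remaining problem is the apiserver rather than the datastore. One manual way to confirm etcd health from inside the VM would be an etcdctl check against the same listener, reusing the certificate paths from the server flags logged above (a sketch; assumes etcdctl v3 is installed and that the server certificate is also accepted for client auth):

	# hypothetical manual check; certificate paths taken from the etcd flags logged above
	sudo ETCDCTL_API=3 etcdctl endpoint health \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key
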
	I1229 06:56:16.254841   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:56:16.254869   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:56:16.278647   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.278679   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:16.278723   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:16.278736   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:16.278750   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.278759   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:16.278780   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.278809   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:16.278890   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:16.278913   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:56:16.278923   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:16.278935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:56:16.278946   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:56:16.278957   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:16.278971   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:16.278982   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:16.278996   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:56:16.279006   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:16.279014   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:16.279031   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:16.279040   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:16.279072   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:16.279083   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:16.279091   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:16.279101   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:16.279110   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:16.279121   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:56:16.279132   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:56:16.279142   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:16.279159   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:16.279173   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:16.279183   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:16.279195   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.279208   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:16.279226   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.279249   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:16.279260   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:16.279275   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:16.279289   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:16.279300   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:16.279313   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:16.279322   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279332   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:16.279343   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:16.279359   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:16.279374   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:16.279386   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:56:16.279396   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:56:16.279406   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:56:16.279418   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.279429   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:56:16.279439   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.279451   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.279460   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:56:16.279469   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.279479   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.279494   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:16.279503   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.279513   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.279523   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:16.279531   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:16.279541   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.279551   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:16.279562   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:16.279570   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:56:16.279585   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:16.279603   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:16.279622   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:16.279661   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279676   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.279688   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:56:16.279698   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:16.279711   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:16.279730   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279741   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:56:16.279751   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:16.279764   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:16.279785   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279805   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279825   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279836   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279852   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:16.287590   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:56:16.287613   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:56:16.310292   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:56:16.310320   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:56:16.331009   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:56:16.331044   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:56:16.331054   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:56:16.331067   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331076   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331083   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:56:16.331093   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:56:16.331114   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:56:16.331232   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331256   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331268   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:56:16.331275   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331289   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331298   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331316   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331329   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331341   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331355   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331363   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331374   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331386   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331400   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331413   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331425   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331441   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331454   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331468   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331478   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331488   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331496   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331506   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331519   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331529   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331537   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331547   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331555   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331564   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331572   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331580   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331592   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331604   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331618   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331629   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331645   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331659   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331673   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331689   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331703   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331716   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331728   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331740   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331756   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331771   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331784   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331816   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331830   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331847   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331863   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331879   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331894   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331908   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.336243   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:56:16.336267   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:56:16.358115   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358145   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358155   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358165   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358177   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.358186   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:56:16.358194   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:56:16.358203   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358209   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.358220   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:56:16.358229   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.358241   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.358254   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.358266   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.358278   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.358285   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:56:16.358307   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.358315   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:56:16.358328   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.358336   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.358343   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:56:16.358350   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358360   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.358369   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.358377   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.358385   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.358399   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.358408   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.358415   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358425   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358436   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358445   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358455   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:56:16.358463   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.358474   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.358481   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.358491   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.358500   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.358508   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.358515   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358530   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:56:16.358543   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:56:16.358555   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.358576   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.358584   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.358593   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.358604   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:56:16.358614   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.358621   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.358628   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.358635   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.358644   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:56:16.358653   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.358666   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.358685   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.358697   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.358707   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.358716   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:56:16.358735   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.358745   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:56:16.358755   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.358763   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.358805   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.358818   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.358827   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.358837   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.358847   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.358854   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.358861   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358867   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:56:16.358874   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358881   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358893   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358904   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358913   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358921   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:56:16.358930   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.358942   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.358950   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.358959   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.358970   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.358979   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.358986   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358992   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.359001   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.359011   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:56:16.359021   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.359029   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.359036   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.359042   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.359052   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:56:16.359060   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.359071   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.359084   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.359094   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.359106   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.359113   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:56:16.359135   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.359144   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:56:16.359154   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.359164   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.359172   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.359182   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.359190   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.359198   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.359206   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.359213   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.359244   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359260   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359275   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359288   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359300   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:56:16.359313   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359328   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359343   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359357   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359372   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359386   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359399   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.359410   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.359422   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:56:16.359435   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:56:16.359442   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.359452   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.359460   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:56:16.359468   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.359474   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.359481   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.359487   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:56:16.359494   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.359502   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:56:16.359511   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.359521   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.359532   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.359544   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.359553   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.359561   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:56:16.359574   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359590   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359602   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359617   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359630   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359646   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359660   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359676   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359689   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359706   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359719   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359731   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359744   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359763   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.359779   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:56:16.359800   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:56:16.359813   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:56:16.359827   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:56:16.359837   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:56:16.359852   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:56:16.359864   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:56:16.359878   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:56:16.359890   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:56:16.359904   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:56:16.359916   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:56:16.359932   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:56:16.359945   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:56:16.359960   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:56:16.359975   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:56:16.359988   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:56:16.360003   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:56:16.360019   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:56:16.360037   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:56:16.360051   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:56:16.360064   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:56:16.360074   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:56:16.360085   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.360093   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.360102   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.360113   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.360121   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.360130   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.360163   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.360172   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.360189   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:56:16.360197   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.360204   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:56:16.360210   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.360218   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:56:16.360225   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.360236   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.360245   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.360255   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.360263   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:56:16.360271   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.360280   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.360288   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.360297   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.360308   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.360317   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.360326   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.360338   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360353   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360365   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360380   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360392   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360410   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360426   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:56:16.360441   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360454   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360467   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360482   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360494   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360510   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360525   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360538   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360553   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360566   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360582   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360599   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:56:16.360617   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360628   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:56:16.360643   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:56:16.360656   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360671   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360682   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:56:16.360699   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.360711   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:56:16.360726   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.360736   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:56:16.360749   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360762   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.377860   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:56:16.377891   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:56:16.394828   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.406131    2634 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	I1229 06:56:16.394877   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519501    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64sn\" (UniqueName: \"kubernetes.io/projected/00a95e37-1394-45a7-a376-b195e31e3e9c-kube-api-access-b64sn\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:56:16.394896   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519550    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a95e37-1394-45a7-a376-b195e31e3e9c-config-volume\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:56:16.394920   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519571    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:56:16.394952   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519587    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:56:16.394976   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.411642    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605"
	I1229 06:56:16.394988   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.545186    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.395012   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731196    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f201ca-6d54-4e15-9584-396fb1486f3c-tmp\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:56:16.395045   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731252    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc5d\" (UniqueName: \"kubernetes.io/projected/b5f201ca-6d54-4e15-9584-396fb1486f3c-kube-api-access-ghc5d\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:56:16.395075   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.628275    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395109   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.634714    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9mrnn" podStartSLOduration=2.634698273 podStartE2EDuration="2.634698273s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.631484207 +0000 UTC m=+7.674081027" watchObservedRunningTime="2025-12-29 06:52:48.634698273 +0000 UTC m=+7.677295093"
	I1229 06:56:16.395143   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.649761    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.395179   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.694857    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfq7m" podStartSLOduration=2.694842541 podStartE2EDuration="2.694842541s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.672691157 +0000 UTC m=+7.715287974" watchObservedRunningTime="2025-12-29 06:52:48.694842541 +0000 UTC m=+7.737439360"
	I1229 06:56:16.395221   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.728097    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.728082592 podStartE2EDuration="1.728082592s" podCreationTimestamp="2025-12-29 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.696376688 +0000 UTC m=+7.738973499" watchObservedRunningTime="2025-12-29 06:52:48.728082592 +0000 UTC m=+7.770679413"
	I1229 06:56:16.395242   17440 command_runner.go:130] > Dec 29 06:52:49 functional-695625 kubelet[2634]: E1229 06:52:49.674249    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395263   17440 command_runner.go:130] > Dec 29 06:52:50 functional-695625 kubelet[2634]: E1229 06:52:50.680852    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395283   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.223368    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.395324   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: I1229 06:52:52.243928    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g7lp9" podStartSLOduration=7.243911092 podStartE2EDuration="7.243911092s" podCreationTimestamp="2025-12-29 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.744380777 +0000 UTC m=+7.786977597" watchObservedRunningTime="2025-12-29 06:52:52.243911092 +0000 UTC m=+11.286507895"
	I1229 06:56:16.395347   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.396096    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.395368   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.693687    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.395390   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: E1229 06:52:53.390926    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.395423   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979173    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:56:16.395451   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979225    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:56:16.395496   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979732    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	I1229 06:56:16.395529   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.981248    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "kube-api-access-lc5xj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	I1229 06:56:16.395551   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079447    2634 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:56:16.395578   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079521    2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:56:16.395597   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.715729    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395618   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.756456    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395641   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: E1229 06:52:54.758451    2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395678   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.758508    2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"} err="failed to get container status \"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395702   17440 command_runner.go:130] > Dec 29 06:52:55 functional-695625 kubelet[2634]: I1229 06:52:55.144582    2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4313c5f-3b86-48de-8f3c-02d7e007542a" path="/var/lib/kubelet/pods/c4313c5f-3b86-48de-8f3c-02d7e007542a/volumes"
	I1229 06:56:16.395719   17440 command_runner.go:130] > Dec 29 06:52:58 functional-695625 kubelet[2634]: E1229 06:52:58.655985    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.395743   17440 command_runner.go:130] > Dec 29 06:53:20 functional-695625 kubelet[2634]: E1229 06:53:20.683378    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395770   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913108    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.395806   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913180    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395831   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913193    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395859   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915141    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.395885   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915181    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395903   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915192    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395929   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139490    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.395956   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139600    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395981   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139623    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396000   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139634    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396027   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917175    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396052   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917271    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396087   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917284    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396114   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918722    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396138   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918780    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396161   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918792    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396186   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139097    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396267   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139170    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396295   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139187    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396315   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139214    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396339   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921730    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396362   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921808    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396387   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921823    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396413   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.923664    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396433   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924161    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396458   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924185    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396484   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139396    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396508   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139458    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396526   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139472    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396550   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139485    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396585   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396609   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396634   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:56:16.396662   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396687   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:56:16.396711   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396739   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396763   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396786   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396821   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396848   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396872   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396891   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396919   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396943   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396966   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396989   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397016   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397040   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397064   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397089   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397114   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397139   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397161   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397187   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397211   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397233   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397256   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397281   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397307   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397330   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397358   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397387   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397424   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397450   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397477   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397500   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397521   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397544   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397571   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397594   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397618   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397644   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397668   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397686   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397742   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397766   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397786   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397818   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397849   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397872   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397897   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397918   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:56:16.397940   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:56:16.397961   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.397984   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.398006   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.398027   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.398047   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:56:16.398071   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.398100   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398122   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398141   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398162   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:56:16.398186   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:56:16.398209   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:56:16.398244   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:56:16.398272   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.398294   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:56:16.398317   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:56:16.398350   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:56:16.398371   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:56:16.398394   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.398413   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.398456   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.398481   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.398498   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:56:16.398525   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:56:16.398557   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.398599   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:56:16.398632   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:56:16.398661   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:56:16.398683   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.398714   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.398746   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:56:16.398769   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.398813   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.398843   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.398873   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398910   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398942   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398963   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:56:16.398985   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399007   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:56:16.399028   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399052   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:56:16.399082   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399104   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:56:16.399121   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.399145   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399170   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399191   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399209   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399231   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.399253   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399275   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399295   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:56:16.399309   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399328   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:56:16.399366   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399402   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399416   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:56:16.399427   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:56:16.399440   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:56:16.399454   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399467   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:56:16.399491   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399517   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.399553   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399565   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:56:16.399576   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:56:16.399588   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399598   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.399618   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399629   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399640   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:56:16.399653   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:56:16.399671   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399684   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399694   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.399724   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399741   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399752   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:56:16.399771   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399782   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399801   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.399822   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399834   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399845   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399857   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399866   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:56:16.399885   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399928   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:56:16.400087   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.400109   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.400130   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.400140   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.400147   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:56:16.400153   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:56:16.400162   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:56:16.400169   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:56:16.400175   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:56:16.400184   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:56:16.400193   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.400201   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:56:16.400213   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:56:16.400222   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:56:16.400233   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:56:16.400243   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.400253   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:56:16.400262   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:56:16.400272   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:56:16.400281   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:56:16.400693   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:56:16.400713   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:56:16.400724   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:56:16.400734   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:56:16.400742   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:56:16.400751   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:56:16.400760   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:56:16.400768   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:56:16.400780   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:56:16.400812   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:56:16.400833   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:56:16.400853   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:56:16.400868   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:56:16.400877   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:56:16.400887   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.400896   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:56:16.400903   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:56:16.400915   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:56:16.400924   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:56:16.400936   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:56:16.400950   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:56:16.400961   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:56:16.400972   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:56:16.400985   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:56:16.400993   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:56:16.401003   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:56:16.401016   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:56:16.401027   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:56:16.401036   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:56:16.401045   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:56:16.401053   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:56:16.401070   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:56:16.401083   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:56:16.401100   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.401120   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.401132   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:56:16.401141   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:56:16.401150   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:56:16.401160   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:56:16.401173   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:56:16.401180   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:56:16.401189   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:56:16.401198   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:56:16.401209   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:56:16.401217   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:56:16.401228   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:56:16.401415   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:56:16.401435   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:56:16.401444   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:56:16.401456   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:56:16.401467   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401486   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401508   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401529   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:56:16.401553   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:56:16.401575   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:56:16.401589   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:56:16.401602   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:56:16.401614   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:56:16.401628   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401640   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.401653   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:56:16.401667   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:56:16.401679   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.401693   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:56:16.401706   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:56:16.401720   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:56:16.401733   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401745   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.401762   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:56:16.401816   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401840   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401871   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:56:16.401900   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401920   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:56:16.401958   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.401977   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.401987   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402002   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402019   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:56:16.402033   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402048   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:56:16.402065   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:56:16.402085   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402107   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402134   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402169   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402204   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.402228   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402250   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402272   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402294   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402314   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.402335   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.402349   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402367   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:56:16.402405   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.402421   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.402433   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402444   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402530   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402557   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:56:16.402569   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402585   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:56:16.402600   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402639   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.402655   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.402666   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402677   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402697   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:56:16.402714   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:56:16.402726   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402737   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.402752   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:56:16.402917   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.402934   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.402947   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.402959   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.402972   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.402996   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403011   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403026   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403043   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403056   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403070   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403082   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403096   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403110   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403125   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403138   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403152   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403292   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403310   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403325   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403339   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403361   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403376   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403389   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403402   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403417   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403428   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403450   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403464   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.403480   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403495   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403506   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403636   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403671   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403686   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403702   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403720   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403739   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403753   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403767   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403780   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403806   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403820   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403833   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403850   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403871   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403890   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403914   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403936   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403952   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:56:16.403976   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403994   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.404007   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.404022   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.404034   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.404046   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:56:16.404066   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.404085   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:56:16.404122   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.454878   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:56:16.454917   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:56:16.478085   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.478126   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:56:16.478136   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:56:16.478148   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:56:16.478155   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:56:16.478166   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.478175   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:56:16.478185   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:56:16.478194   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:56:16.478203   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.478825   17440 logs.go:123] Gathering logs for kube-controller-manager [f48fc04e3475] ...
	I1229 06:56:16.478843   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48fc04e3475"
	I1229 06:56:16.501568   17440 command_runner.go:130] ! I1229 06:56:01.090404       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.501592   17440 command_runner.go:130] ! I1229 06:56:01.103535       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:56:16.501601   17440 command_runner.go:130] ! I1229 06:56:01.103787       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.501610   17440 command_runner.go:130] ! I1229 06:56:01.105458       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:56:16.501623   17440 command_runner.go:130] ! I1229 06:56:01.105665       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.501630   17440 command_runner.go:130] ! I1229 06:56:01.105907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:56:16.501636   17440 command_runner.go:130] ! I1229 06:56:01.105924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.501982   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:56:16.501996   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:56:16.524487   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.524514   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:56:16.524523   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.524767   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:56:16.524788   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.524805   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:56:16.524812   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.526406   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:56:16.526437   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:57:16.604286   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:57:16.606268   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.079810784s)
	W1229 06:57:16.606306   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:57:16.606317   17440 logs.go:123] Gathering logs for kube-apiserver [18d0015c724a] ...
	I1229 06:57:16.606331   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d0015c724a"
	I1229 06:57:16.636305   17440 command_runner.go:130] ! Error response from daemon: No such container: 18d0015c724a
	W1229 06:57:16.636367   17440 logs.go:130] failed kube-apiserver [18d0015c724a]: command: /bin/bash -c "docker logs --tail 400 18d0015c724a" /bin/bash -c "docker logs --tail 400 18d0015c724a": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 18d0015c724a
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 18d0015c724a
	
	** /stderr **
	I1229 06:57:16.636376   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:57:16.636391   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:57:16.657452   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:57:19.160135   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:57:24.162053   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:57:24.162161   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:57:24.182182   17440 command_runner.go:130] > b206d555ad19
	I1229 06:57:24.183367   17440 logs.go:282] 1 containers: [b206d555ad19]
	I1229 06:57:24.183464   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:57:24.206759   17440 command_runner.go:130] > 6b7711ee25a2
	I1229 06:57:24.206821   17440 command_runner.go:130] > d81259f64136
	I1229 06:57:24.206853   17440 logs.go:282] 2 containers: [6b7711ee25a2 d81259f64136]
	I1229 06:57:24.206926   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:57:24.228856   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:57:24.228897   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:57:24.228968   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:57:24.247867   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:57:24.247890   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:57:24.249034   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:57:24.249130   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:57:24.268209   17440 command_runner.go:130] > 8911777281f4
	I1229 06:57:24.269160   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:57:24.269243   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:57:24.288837   17440 command_runner.go:130] > f48fc04e3475
	I1229 06:57:24.288871   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:57:24.290245   17440 logs.go:282] 2 containers: [f48fc04e3475 17fe16a2822a]
	I1229 06:57:24.290337   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:57:24.312502   17440 logs.go:282] 0 containers: []
	W1229 06:57:24.312531   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:57:24.312592   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:57:24.334811   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:57:24.334849   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:57:24.334875   17440 logs.go:123] Gathering logs for kube-apiserver [b206d555ad19] ...
	I1229 06:57:24.334888   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b206d555ad19"
	I1229 06:57:24.357541   17440 command_runner.go:130] ! I1229 06:57:22.434262       1 options.go:263] external host was not specified, using 192.168.39.121
	I1229 06:57:24.357567   17440 command_runner.go:130] ! I1229 06:57:22.436951       1 server.go:150] Version: v1.35.0
	I1229 06:57:24.357577   17440 command_runner.go:130] ! I1229 06:57:22.436991       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.357602   17440 command_runner.go:130] ! E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	W1229 06:57:24.359181   17440 logs.go:138] Found kube-apiserver [b206d555ad19] problem: E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:57:24.359206   17440 logs.go:123] Gathering logs for kube-controller-manager [f48fc04e3475] ...
	I1229 06:57:24.359218   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48fc04e3475"
	I1229 06:57:24.381077   17440 command_runner.go:130] ! I1229 06:56:01.090404       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:57:24.381103   17440 command_runner.go:130] ! I1229 06:56:01.103535       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:57:24.381113   17440 command_runner.go:130] ! I1229 06:56:01.103787       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.381121   17440 command_runner.go:130] ! I1229 06:56:01.105458       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:57:24.381131   17440 command_runner.go:130] ! I1229 06:56:01.105665       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.381137   17440 command_runner.go:130] ! I1229 06:56:01.105907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:57:24.381144   17440 command_runner.go:130] ! I1229 06:56:01.105924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:57:24.382680   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:57:24.382711   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:57:24.427354   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:57:24.427382   17440 command_runner.go:130] > b206d555ad194       5c6acd67e9cd1       2 seconds ago        Exited              kube-apiserver            5                   d3819cc8ab802       kube-apiserver-functional-695625            kube-system
	I1229 06:57:24.427400   17440 command_runner.go:130] > f48fc04e34751       2c9a4b058bd7e       About a minute ago   Running             kube-controller-manager   2                   0a96e34d38f8c       kube-controller-manager-functional-695625   kube-system
	I1229 06:57:24.427411   17440 command_runner.go:130] > 6b7711ee25a2d       0a108f7189562       About a minute ago   Running             etcd                      2                   173054afc2f39       etcd-functional-695625                      kube-system
	I1229 06:57:24.427421   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       3 minutes ago        Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:57:24.427441   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       3 minutes ago        Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:57:24.427454   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       3 minutes ago        Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:57:24.427465   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       3 minutes ago        Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:57:24.427477   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       3 minutes ago        Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:57:24.427488   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:57:24.427509   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       4 minutes ago        Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:57:24.430056   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:57:24.430095   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:57:24.453665   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453712   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453738   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:57:24.453770   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453809   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:57:24.453838   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453867   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.453891   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453911   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453928   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453945   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.453961   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453974   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454002   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454022   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454040   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454058   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454074   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454087   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454103   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454120   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454135   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454149   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454165   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454179   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454194   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454208   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454224   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454246   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454262   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454276   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454294   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454310   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454326   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454342   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454358   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454371   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454386   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454401   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454423   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454447   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454472   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454500   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454519   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454533   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454549   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454565   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454579   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454593   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454608   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454625   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454640   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454655   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:57:24.454667   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:57:24.454680   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.454697   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.454714   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.454729   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.454741   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:57:24.454816   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.454842   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454855   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454870   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454881   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:57:24.454896   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:57:24.454912   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:57:24.454940   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:57:24.454957   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.454969   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:57:24.454987   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:57:24.455012   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:57:24.455025   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:57:24.455039   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.455055   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.455081   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.455097   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.455110   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:57:24.455125   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:57:24.455144   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.455165   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:57:24.455186   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:57:24.455204   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:57:24.455224   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.455243   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.455275   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:57:24.455294   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455310   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.455326   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.455345   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455366   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455386   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455404   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:57:24.455423   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455446   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:57:24.455472   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455490   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:57:24.455506   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455528   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:57:24.455550   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.455573   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455588   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455603   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455615   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.455628   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.455640   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455657   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455669   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:57:24.455681   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455699   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:57:24.455720   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455739   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.455750   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:57:24.455810   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:57:24.455823   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:57:24.455835   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455848   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:57:24.455860   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455872   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.455892   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455904   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:57:24.455916   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:57:24.455930   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455967   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.455990   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456008   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456019   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:57:24.456031   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:57:24.456052   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456067   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.456078   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.456100   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.456114   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456124   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:57:24.456144   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456159   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.456169   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.456191   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456205   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.456216   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.456229   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456239   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:57:24.456260   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456304   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:57:24.456318   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.456331   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.456352   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456364   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.456372   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:57:24.456379   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:57:24.456386   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:57:24.456396   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:57:24.456406   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:57:24.456423   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:57:24.456441   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.456458   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:57:24.456472   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:57:24.456487   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:57:24.456503   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:57:24.456520   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.456540   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:57:24.456560   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:57:24.456573   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:57:24.456584   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:57:24.456626   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:57:24.456639   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:57:24.456647   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:57:24.456657   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:57:24.456665   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:57:24.456676   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:57:24.456685   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:57:24.456695   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:57:24.456703   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:57:24.456714   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:57:24.456726   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:57:24.456739   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:57:24.456748   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:57:24.456761   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:57:24.456771   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.456782   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:57:24.456790   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:57:24.456811   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:57:24.456821   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:57:24.456832   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:57:24.456845   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:57:24.456853   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:57:24.456866   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:57:24.456875   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:57:24.456885   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:57:24.456893   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:57:24.456907   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:57:24.456918   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:57:24.456927   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:57:24.456937   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:57:24.456947   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:57:24.456959   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:57:24.456971   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:57:24.456990   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.457011   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.457023   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:57:24.457032   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:57:24.457044   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:57:24.457054   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:57:24.457067   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:57:24.457074   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:57:24.457083   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:57:24.457093   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:57:24.457105   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:57:24.457112   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:57:24.457125   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:57:24.457133   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:57:24.457145   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:57:24.457154   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:57:24.457168   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:57:24.457178   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457192   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457205   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457220   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:57:24.457235   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:57:24.457247   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:57:24.457258   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:57:24.457271   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:57:24.457284   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:57:24.457299   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457310   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.457322   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:57:24.457333   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:57:24.457345   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.457359   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:57:24.457370   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:57:24.457381   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:57:24.457396   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457410   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.457436   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:57:24.457460   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457481   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457500   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:57:24.457515   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457533   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:57:24.457586   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.457604   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.457613   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.457633   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457649   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:57:24.457664   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457680   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:57:24.457697   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.457717   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457740   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457763   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457785   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457817   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.457904   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457927   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457948   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457976   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457996   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.458019   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.458034   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458050   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:57:24.458090   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.458106   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.458116   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.458130   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458141   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458158   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.458170   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458184   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.458198   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458263   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.458295   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.458316   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.458339   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458367   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.458389   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.458409   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458429   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.458447   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:57:24.458468   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.458490   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.458512   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458529   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458542   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458572   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458587   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458602   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.458617   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458632   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458644   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458659   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.458674   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458686   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.458702   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458717   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.458732   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458746   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458762   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458777   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458790   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458824   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458839   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458852   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458865   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458879   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458889   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458911   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458925   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458939   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458952   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458964   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458983   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458998   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459016   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459031   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459048   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.459062   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459076   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.459090   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459104   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459118   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459132   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459145   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.459158   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459174   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.459186   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.459201   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459215   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459225   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459247   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459261   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459274   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459286   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459302   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459314   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459334   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459352   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.459392   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.459418   17440 command_runner.go:130] > Dec 29 06:56:17 functional-695625 kubelet[6517]: E1229 06:56:17.801052    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.459438   17440 command_runner.go:130] > Dec 29 06:56:19 functional-695625 kubelet[6517]: I1229 06:56:19.403026    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.459461   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.297746    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459483   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342467    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459502   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342554    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459515   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.342589    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459537   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342829    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459552   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.385984    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459567   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386062    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459579   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.386078    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459599   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386220    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459613   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.298955    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459634   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.734998    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.459649   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185639    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459662   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185732    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459676   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.185750    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459693   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493651    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459707   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493733    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459720   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.493755    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459741   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493996    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459753   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.510294    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459769   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511464    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459782   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511520    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459806   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.511535    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459829   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511684    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459845   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525404    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459859   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525467    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459875   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: I1229 06:56:34.525482    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459897   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525663    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459911   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.300040    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459924   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342011    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459938   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342082    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459950   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.342099    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459972   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342223    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459987   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567456    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460000   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567665    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460016   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.567686    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460036   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.568152    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460053   17440 command_runner.go:130] > Dec 29 06:56:47 functional-695625 kubelet[6517]: E1229 06:56:47.736964    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.460094   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.098168    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.27202431 +0000 UTC m=+0.287773690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.460108   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.300747    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460124   17440 command_runner.go:130] > Dec 29 06:56:53 functional-695625 kubelet[6517]: E1229 06:56:53.405155    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.460136   17440 command_runner.go:130] > Dec 29 06:56:56 functional-695625 kubelet[6517]: I1229 06:56:56.606176    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.460148   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.301915    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460162   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.330173    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.460182   17440 command_runner.go:130] > Dec 29 06:57:04 functional-695625 kubelet[6517]: E1229 06:57:04.738681    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.460195   17440 command_runner.go:130] > Dec 29 06:57:10 functional-695625 kubelet[6517]: E1229 06:57:10.302083    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460206   17440 command_runner.go:130] > Dec 29 06:57:20 functional-695625 kubelet[6517]: E1229 06:57:20.302612    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460221   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185645    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460236   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185704    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.460254   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.740062    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.460269   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.185952    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460283   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.186017    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460296   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.186034    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460308   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.873051    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460321   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874264    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460334   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874357    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460347   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.874375    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:57:24.460367   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874499    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460381   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460395   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892083    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460414   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: I1229 06:57:23.892098    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:57:24.460450   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892218    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460499   17440 command_runner.go:130] > Dec 29 06:57:24 functional-695625 kubelet[6517]: E1229 06:57:24.100978    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.27223373 +0000 UTC m=+0.287983111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
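	The kubelet log above shows kube-apiserver cycling through CrashLoopBackOff while node registration keeps timing out, which matches the SoftStart failure. For manual triage the same containers can be inspected directly on the VM; a minimal sketch, assuming the profile name from this run and that the crash-looping container ID is substituted by hand:
	  minikube ssh -p functional-695625 -- sudo docker ps -a --filter name=kube-apiserver
	  minikube ssh -p functional-695625 -- sudo docker logs --tail 100 <kube-apiserver-container-id>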
	I1229 06:57:24.513870   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:57:24.513913   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:57:24.542868   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:57:24.542904   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:57:24.542974   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:57:24.542992   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:57:24.543020   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:57:24.543037   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:57:24.543067   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:57:24.543085   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:57:24.543199   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:57:24.543237   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:57:24.543258   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:57:24.543276   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:57:24.543291   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:57:24.543306   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:57:24.543327   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:57:24.543344   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:57:24.543365   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:57:24.543380   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:57:24.543393   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:57:24.543419   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:57:24.543437   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:57:24.543464   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:57:24.543483   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:57:24.543499   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:57:24.543511   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:57:24.543561   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:57:24.543585   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:57:24.543605   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:57:24.543623   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:57:24.543659   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:57:24.543680   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:57:24.543701   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:57:24.543722   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:57:24.543744   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:57:24.543770   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:57:24.543821   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:57:24.543840   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:57:24.543865   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:57:24.543886   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:57:24.543908   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:57:24.543927   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:57:24.543945   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.543962   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:57:24.543980   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:57:24.544010   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:57:24.544031   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:57:24.544065   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:57:24.544084   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:57:24.544103   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:57:24.544120   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:57:24.544136   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:57:24.544157   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:57:24.544176   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:57:24.544193   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:57:24.544213   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:57:24.544224   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:57:24.544248   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:57:24.544264   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:57:24.544283   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:57:24.544298   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:57:24.544314   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:57:24.544331   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:57:24.544345   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:57:24.544364   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:57:24.544381   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:57:24.544405   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:57:24.544430   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:57:24.544465   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:57:24.544517   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544537   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:57:24.544554   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:57:24.544575   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:57:24.544595   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:57:24.544623   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544641   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:57:24.544662   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:57:24.544683   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:57:24.544711   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544730   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.544767   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544807   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.544828   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
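	The etcd instance above restarts cleanly (leader re-elected at term 3, client traffic served on 2379) and is then shut down by a terminate signal about a second later, so the apiserver lease timeouts are not obviously an etcd-side failure. A minimal sketch for checking the currently running etcd from inside the VM, assuming the metrics listener stays on 127.0.0.1:2381 as configured here:
	  minikube ssh -p functional-695625 -- curl -s http://127.0.0.1:2381/health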
	I1229 06:57:24.552509   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:57:24.552540   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:57:24.575005   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:57:24.575036   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:57:24.597505   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.597545   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.597560   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.597577   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.597596   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.597610   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:57:24.597628   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:57:24.597642   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.597654   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.597667   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:57:24.597682   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.597705   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.597733   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.597753   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.597765   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.597773   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:57:24.597803   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.597814   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:57:24.597825   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.597834   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.597841   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:57:24.597848   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.597856   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.597866   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.597874   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.597883   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.597900   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.597909   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.597916   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.597925   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.597936   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.597944   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.597953   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:57:24.597960   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.597973   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.597981   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.597991   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.597999   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.598010   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.598017   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598029   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:57:24.598041   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:57:24.598054   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598067   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598074   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598084   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598095   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:57:24.598104   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.598111   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.598117   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.598126   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.598132   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:57:24.598141   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.598154   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.598174   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.598186   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.598196   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.598205   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:57:24.598224   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.598235   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:57:24.598246   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.598256   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.598264   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.598273   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.598281   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.598289   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.598297   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.598306   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.598314   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.598320   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:57:24.598327   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598334   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.598345   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.598354   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.598365   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.598373   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:57:24.598381   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.598389   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.598400   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.598415   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.598431   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.598447   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.598463   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598476   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598492   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598503   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:57:24.598513   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.598522   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.598531   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.598538   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.598545   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:57:24.598555   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.598578   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.598591   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.598602   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.598613   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.598621   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:57:24.598642   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.598653   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:57:24.598664   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.598674   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.598683   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.598693   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.598701   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.598716   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.598724   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.598732   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.598760   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598774   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598787   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598815   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598832   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:57:24.598845   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598860   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598873   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598889   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598904   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.598918   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.598933   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598946   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598958   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:57:24.598973   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:57:24.598980   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598989   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598999   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:57:24.599008   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.599015   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.599022   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.599030   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:57:24.599036   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.599043   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:57:24.599054   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.599065   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.599077   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.599088   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.599099   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.599107   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:57:24.599120   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599138   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599151   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599168   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599185   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599198   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599213   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599228   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599241   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599257   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599270   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599285   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599297   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599319   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.599331   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:57:24.599346   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:57:24.599359   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:57:24.599376   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:57:24.599387   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:57:24.599405   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:57:24.599423   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:57:24.599452   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:57:24.599472   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:57:24.599489   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:57:24.599503   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:57:24.599517   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:57:24.599529   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:57:24.599544   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:57:24.599559   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:57:24.599572   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:57:24.599587   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:57:24.599602   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:57:24.599615   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:57:24.599631   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:57:24.599644   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:57:24.599654   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:57:24.599664   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.599673   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.599682   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.599692   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.599700   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.599710   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.599747   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.599756   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.599772   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:57:24.599782   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.599789   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:57:24.599806   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.599814   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:57:24.599822   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.599830   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.599841   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.599849   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.599860   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:57:24.599868   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.599879   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.599886   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.599896   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.599907   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.599914   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.599922   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.599934   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599953   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599970   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599983   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600000   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600017   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600034   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:57:24.600049   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600063   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600079   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600092   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600107   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600121   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600137   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600152   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600164   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600177   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600190   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600207   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:57:24.600223   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600235   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:57:24.600247   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:57:24.600261   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600276   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600288   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:57:24.600304   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600317   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600331   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600345   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600357   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600373   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600386   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 dockerd[4014]: time="2025-12-29T06:56:32.448119389Z" level=info msg="ignoring event" container=0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600403   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.600423   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:57:24.600448   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600472   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600490   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 dockerd[4014]: time="2025-12-29T06:57:22.465508622Z" level=info msg="ignoring event" container=b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.619075   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:57:24.619123   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:58:24.700496   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:58:24.700542   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.081407425s)
	W1229 06:58:24.700578   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:58:24.700591   17440 logs.go:123] Gathering logs for etcd [6b7711ee25a2] ...
	I1229 06:58:24.700607   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b7711ee25a2"
	I1229 06:58:24.726206   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.924768Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:58:24.726238   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925193Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:58:24.726283   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925252Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:58:24.726296   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925487Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:58:24.726311   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925602Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:58:24.726321   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925710Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:58:24.726342   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925810Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:58:24.726358   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.934471Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:58:24.726438   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.935217Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:58:24.726461   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.937503Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000068080}"}
	I1229 06:58:24.726472   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940423Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:58:24.726483   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940850Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.479356ms"}
	I1229 06:58:24.726492   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.941120Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":499}
	I1229 06:58:24.726503   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945006Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:58:24.726517   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945707Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:58:24.726528   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945966Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:58:24.726540   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.951906Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":499}
	I1229 06:58:24.726552   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952063Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:58:24.726560   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952160Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:58:24.726577   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952338Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:58:24.726590   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952385Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:58:24.726607   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952396Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:58:24.726618   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952406Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:58:24.726629   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952416Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:58:24.726636   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952460Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:58:24.726647   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:58:24.726657   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 3"}
	I1229 06:58:24.726670   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 3, commit: 499, applied: 0, lastindex: 499, lastterm: 3]"}
	I1229 06:58:24.726680   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.955095Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:58:24.726698   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.961356Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:58:24.726711   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.967658Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:58:24.726723   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.968487Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:58:24.726735   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969020Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:58:24.726750   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969260Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:58:24.726765   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969708Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:58:24.726784   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970043Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:58:24.726826   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970828Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:58:24.726839   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971046Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:58:24.726848   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970057Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:58:24.726858   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971258Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:58:24.726870   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970152Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:58:24.726883   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971336Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:58:24.726896   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971370Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:58:24.726906   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970393Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:58:24.726922   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972410Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:58:24.726935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972698Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:58:24.726947   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 3"}
	I1229 06:58:24.726956   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 3"}
	I1229 06:58:24.726969   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:58:24.726982   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:58:24.726997   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 4"}
	I1229 06:58:24.727009   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 4"}
	I1229 06:58:24.727020   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:58:24.727029   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 4"}
	I1229 06:58:24.727039   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.356018Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 4"}
	I1229 06:58:24.727056   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358237Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:58:24.727064   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358323Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:58:24.727072   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358268Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:58:24.727081   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:58:24.727089   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:58:24.727100   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360417Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:58:24.727109   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:58:24.727120   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:58:24.727132   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363760Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:58:24.733042   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:58:24.733064   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:58:24.755028   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.755231   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:58:24.755256   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:58:24.776073   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:58:24.776109   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:58:24.776120   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:58:24.776135   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:58:24.776154   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:58:24.776162   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:58:24.776180   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:58:24.776188   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:58:24.776195   17440 command_runner.go:130] !  >
	I1229 06:58:24.776212   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:58:24.776224   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:58:24.776249   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:58:24.776257   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:58:24.776266   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.776282   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:58:24.776296   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:58:24.776307   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:58:24.776328   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:58:24.776350   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:58:24.776366   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:58:24.776376   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:58:24.776388   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:58:24.776404   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:58:24.776420   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:58:24.776439   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:58:24.776453   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
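	The only error in the kube-proxy log is the IPv6 one: the guest kernel exposes no nat table to ip6tables, so kube-proxy falls back to single-stack IPv4. That is almost certainly unrelated to this test failure, but it can be confirmed from inside the VM with a sketch like the following (the module name ip6table_nat is an assumption about the kernel build, not something the log states):

	    # reproduce the probe kube-proxy performs; exit status 3 matches the error above
	    sudo ip6tables -t nat -L -n
	    # check whether the module is loaded or even present for this kernel (it may not be built into the ISO)
	    lsmod | grep ip6table_nat || sudo modprobe ip6table_nat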
	I1229 06:58:24.778558   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:58:24.778595   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:58:24.793983   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:58:24.794025   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:58:24.794040   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:58:24.794054   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:58:24.794069   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:58:24.794079   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:58:24.794096   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:58:24.794106   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:58:24.794117   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:58:24.794125   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:58:24.794136   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:58:24.794146   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:58:24.794160   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:24.794167   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:58:24.794178   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:58:24.794186   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:24.794196   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:24.794207   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:24.794215   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:58:24.794221   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:58:24.794229   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:58:24.794241   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:58:24.794252   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:58:24.794260   17440 command_runner.go:130] > [ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:24.794271   17440 command_runner.go:130] > [Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:24.795355   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:58:24.795387   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:58:24.820602   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.820635   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:58:24.820646   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:58:24.820657   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:58:24.820665   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:58:24.820672   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.820681   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:58:24.820692   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:58:24.820698   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:58:24.820705   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:58:24.822450   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:58:24.822473   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:58:24.844122   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.844156   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:58:24.844170   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.844184   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:58:24.844201   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:24.844210   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:58:24.844218   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:58:24.845429   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:58:24.845453   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:58:24.867566   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:58:24.867597   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:58:24.867607   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:58:24.867615   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867622   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867633   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:58:24.867653   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:58:24.867681   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:58:24.867694   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867704   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867719   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:58:24.867734   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867750   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867763   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867817   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867836   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867848   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867859   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867871   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867883   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867891   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867901   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867914   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867926   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867944   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867956   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867972   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867982   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867997   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868013   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868028   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868048   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868063   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868071   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868081   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868098   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868111   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868127   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868140   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868153   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868164   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868177   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868192   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868207   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868221   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868236   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868247   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868258   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868275   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868290   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868304   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868320   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868332   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868342   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868358   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868373   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868385   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868400   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868414   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868425   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868438   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
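	The long run of identical warnings above is the storage-provisioner renewing its Endpoints-based leader-election lock (kube-system/k8s.io-minikube-hostpath) roughly every two seconds; against a v1.33+ API server that is deprecation noise, not a fault. The lock object can be inspected directly, assuming the kubeconfig context carries the profile name (an assumption; the log does not show it):

	    # the Endpoints record the deprecation warnings refer to
	    kubectl --context functional-695625 -n kube-system \
	      get endpoints k8s.io-minikube-hostpath -o yaml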
	I1229 06:58:24.872821   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:58:24.872842   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 06:58:24.872901   17440 out.go:285] X Problems detected in kube-apiserver [b206d555ad19]:
	W1229 06:58:24.872915   17440 out.go:285]   E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:58:24.872919   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:58:24.872923   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
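	This is the first concrete problem the log surfaces: the replacement kube-apiserver cannot bind 0.0.0.0:8441 because something already holds the port, which is also exactly what the kubeadm preflight check fails on further down. A quick way to identify the holder from inside the VM (reachable with `minikube ssh -p functional-695625`); the stale process is never named in this log, so the commands below are a diagnostic sketch rather than a known fix:

	    # show the PID/process currently bound to the apiserver port
	    sudo ss -ltnp 'sport = :8441'
	    # the runtime here is Docker, so a leftover apiserver container would show up like this
	    docker ps --filter name=kube-apiserver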
	I1229 06:58:34.875381   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:58:39.877679   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:58:39.877779   17440 kubeadm.go:602] duration metric: took 4m48.388076341s to restartPrimaryControlPlane
	W1229 06:58:39.877879   17440 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1229 06:58:39.877946   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 06:58:39.930050   17440 command_runner.go:130] ! W1229 06:58:39.921577    8187 resetconfiguration.go:53] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1229 06:58:49.935089   17440 command_runner.go:130] ! W1229 06:58:49.926653    8187 reset.go:141] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
	I1229 06:58:49.935131   17440 command_runner.go:130] ! W1229 06:58:49.926754    8187 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
	I1229 06:58:50.998307   17440 command_runner.go:130] > [reset] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I1229 06:58:50.998341   17440 command_runner.go:130] > [reset] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
	I1229 06:58:50.998348   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:50.998357   17440 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/minikube/etcd
	I1229 06:58:50.998366   17440 command_runner.go:130] > [reset] Stopping the kubelet service
	I1229 06:58:50.998372   17440 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I1229 06:58:50.998386   17440 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I1229 06:58:50.998407   17440 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I1229 06:58:50.998417   17440 command_runner.go:130] > The reset process does not perform cleanup of CNI plugin configuration,
	I1229 06:58:50.998428   17440 command_runner.go:130] > network filtering rules and kubeconfig files.
	I1229 06:58:50.998434   17440 command_runner.go:130] > For information on how to perform this cleanup manually, please see:
	I1229 06:58:50.998442   17440 command_runner.go:130] >     https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
	I1229 06:58:50.998458   17440 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (11.120499642s)
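	As its own output notes, `kubeadm reset` deliberately leaves CNI configuration, packet-filtering rules, and kubeconfig files in place. The manual cleanup the linked page describes looks roughly like the sketch below; it is illustrative only and is not something minikube runs in this test:

	    # illustrative manual cleanup per the kubeadm reset documentation; not executed here
	    sudo rm -rf /etc/cni/net.d
	    sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X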
	I1229 06:58:50.998527   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:58:51.015635   17440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:58:51.028198   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:58:51.040741   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1229 06:58:51.040780   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1229 06:58:51.040811   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1229 06:58:51.040826   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.040865   17440 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.040877   17440 kubeadm.go:158] found existing configuration files:
	
	I1229 06:58:51.040925   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:58:51.051673   17440 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.052090   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.052155   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:58:51.064755   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:58:51.076455   17440 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.076517   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.076577   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:58:51.088881   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.099253   17440 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.099652   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.099710   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.111487   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:58:51.122532   17440 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.122905   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.122972   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:58:51.135143   17440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 06:58:51.355420   17440 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.355450   17440 command_runner.go:130] ! 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.355543   17440 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 06:58:51.355556   17440 command_runner.go:130] ! [preflight] Some fatal errors occurred:
	I1229 06:58:51.355615   17440 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.355625   17440 command_runner.go:130] ! 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.355790   17440 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.355837   17440 command_runner.go:130] ! [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.356251   17440 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.356265   17440 command_runner.go:130] ! error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.356317   17440 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.356324   17440 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.357454   17440 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.357471   17440 command_runner.go:130] > [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.357544   17440 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:58:51.357561   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	W1229 06:58:51.357680   17440 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 06:58:51.357753   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 06:58:51.401004   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:51.401036   17440 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I1229 06:58:51.401047   17440 command_runner.go:130] > [reset] Stopping the kubelet service
	I1229 06:58:51.408535   17440 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I1229 06:58:51.413813   17440 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I1229 06:58:51.415092   17440 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I1229 06:58:51.415117   17440 command_runner.go:130] > The reset process does not perform cleanup of CNI plugin configuration,
	I1229 06:58:51.415128   17440 command_runner.go:130] > network filtering rules and kubeconfig files.
	I1229 06:58:51.415137   17440 command_runner.go:130] > For information on how to perform this cleanup manually, please see:
	I1229 06:58:51.415145   17440 command_runner.go:130] >     https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
	I1229 06:58:51.415645   17440 command_runner.go:130] ! W1229 06:58:51.391426    8625 resetconfiguration.go:53] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1229 06:58:51.415670   17440 command_runner.go:130] ! W1229 06:58:51.392518    8625 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
	I1229 06:58:51.415739   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:58:51.432316   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:58:51.444836   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1229 06:58:51.444860   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1229 06:58:51.444867   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1229 06:58:51.444874   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.445417   17440 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.445435   17440 kubeadm.go:158] found existing configuration files:
	
	I1229 06:58:51.445485   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:58:51.457038   17440 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.457099   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.457146   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:58:51.469980   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:58:51.480965   17440 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.481435   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.481498   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:58:51.493408   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.504342   17440 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.504404   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.504468   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.516567   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:58:51.526975   17440 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.527475   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.527532   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:58:51.539365   17440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 06:58:51.587038   17440 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.587068   17440 command_runner.go:130] > [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.587108   17440 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:58:51.587113   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:51.738880   17440 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.738912   17440 command_runner.go:130] ! 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.738963   17440 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 06:58:51.738975   17440 command_runner.go:130] ! [preflight] Some fatal errors occurred:
	I1229 06:58:51.739029   17440 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.739038   17440 command_runner.go:130] ! 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.739157   17440 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.739166   17440 command_runner.go:130] ! [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.739271   17440 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.739294   17440 command_runner.go:130] ! error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.739348   17440 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.739355   17440 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.739406   17440 kubeadm.go:403] duration metric: took 5m0.289116828s to StartCluster
	I1229 06:58:51.739455   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 06:58:51.739507   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 06:58:51.776396   17440 cri.go:96] found id: ""
	I1229 06:58:51.776420   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.776428   17440 logs.go:284] No container was found matching "kube-apiserver"
	I1229 06:58:51.776434   17440 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 06:58:51.776522   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 06:58:51.808533   17440 cri.go:96] found id: ""
	I1229 06:58:51.808556   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.808563   17440 logs.go:284] No container was found matching "etcd"
	I1229 06:58:51.808570   17440 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 06:58:51.808625   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 06:58:51.841860   17440 cri.go:96] found id: ""
	I1229 06:58:51.841887   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.841894   17440 logs.go:284] No container was found matching "coredns"
	I1229 06:58:51.841900   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 06:58:51.841955   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 06:58:51.875485   17440 cri.go:96] found id: ""
	I1229 06:58:51.875512   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.875520   17440 logs.go:284] No container was found matching "kube-scheduler"
	I1229 06:58:51.875526   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 06:58:51.875576   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 06:58:51.909661   17440 cri.go:96] found id: ""
	I1229 06:58:51.909699   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.909712   17440 logs.go:284] No container was found matching "kube-proxy"
	I1229 06:58:51.909720   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 06:58:51.909790   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 06:58:51.943557   17440 cri.go:96] found id: ""
	I1229 06:58:51.943594   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.943607   17440 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 06:58:51.943616   17440 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 06:58:51.943685   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 06:58:51.979189   17440 cri.go:96] found id: ""
	I1229 06:58:51.979219   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.979228   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:58:51.979234   17440 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 06:58:51.979285   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 06:58:52.013436   17440 cri.go:96] found id: ""
	I1229 06:58:52.013472   17440 logs.go:282] 0 containers: []
	W1229 06:58:52.013482   17440 logs.go:284] No container was found matching "storage-provisioner"
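	After the forced reset every control-plane container is gone, so each per-component crictl scan above comes back empty. The same information is available in one pass over the CRI socket those scans already use:

	    # list all containers, running or exited, through the cri-dockerd socket seen in the log
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a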
	I1229 06:58:52.013494   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:58:52.013507   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:58:52.030384   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.030429   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:58:52.030454   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.030481   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030506   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030530   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030550   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:58:52.030574   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:58:52.030601   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:58:52.030643   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:58:52.030670   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.030694   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:58:52.030721   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:58:52.030757   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:58:52.030787   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:58:52.030826   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.030853   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.030893   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.030921   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:58:52.030943   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:58:52.030981   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:58:52.031015   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.031053   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:58:52.031087   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:58:52.031117   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:58:52.031146   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.031189   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.031223   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:58:52.031253   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.031281   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.031311   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.031347   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031383   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031422   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031445   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:58:52.031467   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031491   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:58:52.031516   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031538   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:58:52.031562   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031584   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:58:52.031606   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.031628   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031651   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031673   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031695   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.031717   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.031738   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031763   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031786   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:58:52.031824   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031855   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:58:52.031894   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.031949   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.031981   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:58:52.032005   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:58:52.032025   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:58:52.032048   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032069   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:58:52.032093   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.032112   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.032150   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.032170   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:58:52.032192   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:58:52.032214   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032234   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032269   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032290   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032314   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:58:52.032335   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:58:52.032371   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032395   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.032414   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.032452   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.032473   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032495   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:58:52.032530   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032552   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032573   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032608   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032631   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032655   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032676   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032696   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:58:52.032735   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032819   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:58:52.032845   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032864   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032899   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032919   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:52.032935   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:58:52.032948   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:58:52.032960   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.032981   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:58:52.032995   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.033012   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:58:52.033029   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:52.033042   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:58:52.033062   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:58:52.033080   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:58:52.033101   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:58:52.033120   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:52.033138   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:58:52.033166   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:58:52.033187   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:58:52.033206   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:58:52.033274   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:58:52.033294   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:58:52.033309   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:58:52.033326   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:58:52.033343   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:58:52.033359   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:58:52.033378   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:58:52.033398   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:58:52.033413   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:58:52.033431   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:58:52.033453   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:58:52.033476   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:58:52.033492   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:58:52.033507   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:58:52.033526   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033542   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:58:52.033559   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:58:52.033609   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:58:52.033625   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:58:52.033642   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:58:52.033665   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:58:52.033681   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:58:52.033700   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:58:52.033718   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:58:52.033734   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:58:52.033751   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:58:52.033776   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:58:52.033808   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:58:52.033826   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:58:52.033840   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:58:52.033855   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:58:52.033878   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:58:52.033905   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:58:52.033937   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033974   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033993   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:58:52.034010   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:58:52.034030   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:58:52.034050   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:58:52.034084   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:58:52.034099   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:58:52.034116   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:58:52.034134   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:58:52.034152   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:58:52.034167   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:58:52.034186   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:58:52.034203   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:58:52.034224   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:58:52.034241   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:58:52.034265   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:58:52.034286   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034308   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034332   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034358   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:58:52.034380   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:58:52.034404   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:58:52.034427   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:58:52.034450   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:58:52.034472   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:58:52.034499   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034521   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.034544   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:58:52.034566   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:58:52.034588   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:58:52.034611   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:58:52.034633   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:58:52.034655   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:58:52.034678   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034697   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.034724   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:58:52.034749   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034771   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034819   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:58:52.034843   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034873   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:58:52.034936   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.034963   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.034993   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035018   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035049   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:58:52.035071   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035099   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:58:52.035126   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.035159   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035194   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035228   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035263   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035299   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.035333   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035368   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035408   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035445   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035477   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.035512   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.035534   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035563   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:58:52.035631   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.035658   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.035677   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035699   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035720   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035749   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.035771   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035814   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.035838   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035902   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.035927   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.035947   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035978   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036010   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.036038   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.036061   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036082   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.036102   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:58:52.036121   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.036141   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.036165   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036190   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036212   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036251   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036275   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036299   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.036323   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036345   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036369   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036393   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.036418   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036441   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.036464   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036488   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.036511   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036536   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036561   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036584   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036606   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036642   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036664   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036687   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036711   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036734   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036754   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036806   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036895   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036922   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036945   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036973   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037009   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037032   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037052   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037076   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037098   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.037122   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037144   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.037168   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037189   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037212   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037235   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037254   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037278   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037303   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.037325   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037348   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037372   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037392   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037424   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037449   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037472   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037497   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037518   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037539   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037574   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037604   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.037669   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.037694   17440 command_runner.go:130] > Dec 29 06:56:17 functional-695625 kubelet[6517]: E1229 06:56:17.801052    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.037713   17440 command_runner.go:130] > Dec 29 06:56:19 functional-695625 kubelet[6517]: I1229 06:56:19.403026    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.037734   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.297746    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.037760   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342467    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037784   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342554    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037816   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.342589    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037851   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342829    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037875   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.385984    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037897   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386062    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037917   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.386078    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037950   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386220    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037981   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.298955    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038011   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.734998    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.038035   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185639    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038059   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185732    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038079   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.185750    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.038102   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493651    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038125   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493733    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038147   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.493755    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038182   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493996    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038203   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.510294    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.038223   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511464    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038243   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511520    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038260   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.511535    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038297   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511684    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038321   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525404    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038344   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525467    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038365   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: I1229 06:56:34.525482    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038401   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525663    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038423   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.300040    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038449   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342011    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038471   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342082    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038491   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.342099    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038526   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342223    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038549   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567456    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038585   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567665    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038608   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.567686    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038643   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.568152    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038670   17440 command_runner.go:130] > Dec 29 06:56:47 functional-695625 kubelet[6517]: E1229 06:56:47.736964    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.038735   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.098168    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.27202431 +0000 UTC m=+0.287773690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.038758   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.300747    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038785   17440 command_runner.go:130] > Dec 29 06:56:53 functional-695625 kubelet[6517]: E1229 06:56:53.405155    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.038817   17440 command_runner.go:130] > Dec 29 06:56:56 functional-695625 kubelet[6517]: I1229 06:56:56.606176    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.038842   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.301915    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038869   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.330173    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.038900   17440 command_runner.go:130] > Dec 29 06:57:04 functional-695625 kubelet[6517]: E1229 06:57:04.738681    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.038922   17440 command_runner.go:130] > Dec 29 06:57:10 functional-695625 kubelet[6517]: E1229 06:57:10.302083    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038946   17440 command_runner.go:130] > Dec 29 06:57:20 functional-695625 kubelet[6517]: E1229 06:57:20.302612    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038977   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185645    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039003   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185704    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.039034   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.740062    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.039059   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.185952    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039082   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.186017    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039102   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.186034    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.039126   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.873051    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.039149   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874264    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039171   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874357    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039191   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.874375    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039227   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874499    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039252   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039275   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892083    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039295   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: I1229 06:57:23.892098    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039330   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892218    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039396   17440 command_runner.go:130] > Dec 29 06:57:24 functional-695625 kubelet[6517]: E1229 06:57:24.100978    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.27223373 +0000 UTC m=+0.287983111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.039419   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.302837    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039444   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.341968    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039468   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.342033    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039488   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: I1229 06:57:30.342050    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039523   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.342233    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039550   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.608375    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.039576   17440 command_runner.go:130] > Dec 29 06:57:32 functional-695625 kubelet[6517]: E1229 06:57:32.186377    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039598   17440 command_runner.go:130] > Dec 29 06:57:32 functional-695625 kubelet[6517]: E1229 06:57:32.186459    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.039675   17440 command_runner.go:130] > Dec 29 06:57:33 functional-695625 kubelet[6517]: E1229 06:57:33.188187    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039700   17440 command_runner.go:130] > Dec 29 06:57:33 functional-695625 kubelet[6517]: E1229 06:57:33.188267    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.039715   17440 command_runner.go:130] > Dec 29 06:57:37 functional-695625 kubelet[6517]: I1229 06:57:37.010219    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.039749   17440 command_runner.go:130] > Dec 29 06:57:38 functional-695625 kubelet[6517]: E1229 06:57:38.741770    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.039773   17440 command_runner.go:130] > Dec 29 06:57:40 functional-695625 kubelet[6517]: E1229 06:57:40.303258    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039808   17440 command_runner.go:130] > Dec 29 06:57:50 functional-695625 kubelet[6517]: E1229 06:57:50.304120    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039837   17440 command_runner.go:130] > Dec 29 06:57:55 functional-695625 kubelet[6517]: E1229 06:57:55.743031    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.039903   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 kubelet[6517]: E1229 06:57:58.103052    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.272240811 +0000 UTC m=+0.287990191,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.039929   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.304627    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039954   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.432518    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.039991   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.432667    6517 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)
	I1229 06:58:52.040014   17440 command_runner.go:130] > Dec 29 06:58:10 functional-695625 kubelet[6517]: E1229 06:58:10.305485    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040037   17440 command_runner.go:130] > Dec 29 06:58:11 functional-695625 kubelet[6517]: E1229 06:58:11.012407    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.040068   17440 command_runner.go:130] > Dec 29 06:58:12 functional-695625 kubelet[6517]: E1229 06:58:12.743824    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040086   17440 command_runner.go:130] > Dec 29 06:58:18 functional-695625 kubelet[6517]: I1229 06:58:18.014210    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.040107   17440 command_runner.go:130] > Dec 29 06:58:20 functional-695625 kubelet[6517]: E1229 06:58:20.306630    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040127   17440 command_runner.go:130] > Dec 29 06:58:24 functional-695625 kubelet[6517]: E1229 06:58:24.186554    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040149   17440 command_runner.go:130] > Dec 29 06:58:24 functional-695625 kubelet[6517]: E1229 06:58:24.186719    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.040176   17440 command_runner.go:130] > Dec 29 06:58:29 functional-695625 kubelet[6517]: E1229 06:58:29.745697    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040195   17440 command_runner.go:130] > Dec 29 06:58:30 functional-695625 kubelet[6517]: E1229 06:58:30.307319    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040256   17440 command_runner.go:130] > Dec 29 06:58:32 functional-695625 kubelet[6517]: E1229 06:58:32.105206    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.286010652 +0000 UTC m=+0.301760032,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.040279   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.184790    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040300   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.184918    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040319   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: I1229 06:58:39.184949    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040354   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.185100    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040377   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184709    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040397   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184771    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.040413   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.308010    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040433   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.185947    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040455   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.186016    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040477   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.186033    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040498   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503148    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040520   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503225    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040538   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.503241    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040576   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040596   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040619   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040640   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040658   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040692   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040711   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040729   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040741   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040764   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040784   17440 command_runner.go:130] > Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040807   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:58:52.040815   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:58:52.040821   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.040830   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
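The kubelet journal above shows kube-apiserver stuck in CrashLoopBackOff while the node object for functional-695625 is never registered. A minimal diagnostic sketch for this situation, assuming shell access to the VM via minikube ssh and the Docker runtime reported in this log (the container ID is a placeholder taken from the previous command's output, not from this report):

    # Illustrative only: list the kubelet-managed kube-apiserver containers and
    # read the last lines of the most recent one to see why it is crash-looping.
    minikube ssh -p functional-695625 -- sudo docker ps -a --filter name=kube-apiserver
    minikube ssh -p functional-695625 -- sudo docker logs --tail 50 <container-id-from-previous-output>
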
	I1229 06:58:52.093067   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:58:52.093106   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:58:52.108863   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:58:52.108898   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:58:52.108912   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:58:52.108925   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:58:52.108937   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:58:52.108945   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:58:52.108951   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:58:52.108957   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:58:52.108962   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:58:52.108971   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:58:52.108975   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:58:52.108980   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:58:52.108992   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:52.108997   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:58:52.109006   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:58:52.109011   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:52.109021   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:52.109031   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:52.109036   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:58:52.109043   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:58:52.109048   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:58:52.109055   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:58:52.109062   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:58:52.109067   17440 command_runner.go:130] > [ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109072   17440 command_runner.go:130] > [Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109080   17440 command_runner.go:130] > [Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109088   17440 command_runner.go:130] > [  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109931   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:58:52.109946   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:59:52.193646   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:59:52.193695   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.083736259s)
	W1229 06:59:52.193730   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
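The describe-nodes probe above blocks for the full 60s server-side timeout because the apiserver at 192.168.39.121:8441 is not healthy. A short sketch of a bounded readiness check against the same endpoint, assuming the kubectl binary and kubeconfig paths shown in the log are still present on the node:

    # Illustrative sketch: probe apiserver readiness with a bounded client timeout
    # instead of waiting out the 60s server-side timeout seen above.
    sudo /var/lib/minikube/binaries/v1.35.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      --request-timeout=10s \
      get --raw='/readyz?verbose'
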
	I1229 06:59:52.193743   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:59:52.193757   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:59:52.211424   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.211464   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.211503   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.211519   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.211538   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:59:52.211555   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:59:52.211569   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:59:52.211587   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.211601   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.211612   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:59:52.211630   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.211652   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.211672   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.211696   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.211714   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.211730   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:59:52.211773   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.211790   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:59:52.211824   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.211841   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.211855   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:59:52.211871   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.211884   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.211899   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.211913   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.211926   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.211948   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.211959   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.211970   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.211984   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.212011   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.212025   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.212039   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:59:52.212064   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.212079   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.212093   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.212108   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.212125   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.212139   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.212152   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212172   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:59:52.212192   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:59:52.212215   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.212237   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.212252   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.212266   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.212285   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:59:52.212301   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.212316   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.212331   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.212341   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.212357   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:59:52.212372   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.212392   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.212423   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.212444   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.212461   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.212477   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:59:52.212512   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.212529   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:59:52.212547   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.212562   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.212577   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.212594   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.212612   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.212628   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.212643   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.212656   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.212671   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.212684   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:59:52.212699   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212714   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.212732   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.212751   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.212767   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.212783   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:59:52.212808   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.212827   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.212844   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.212864   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.212881   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.212899   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.212916   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212932   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.212949   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.212974   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:59:52.212995   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.213006   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.213020   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.213033   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.213055   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:59:52.213073   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.213094   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.213115   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.213135   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.213153   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.213169   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:59:52.213204   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.213221   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:59:52.213242   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.213258   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.213275   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.213291   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.213308   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.213321   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.213334   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.213348   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.213387   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213414   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213440   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213465   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213486   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:59:52.213507   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213528   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213549   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213573   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213595   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213616   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213637   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.213655   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.213675   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:59:52.213697   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:59:52.213709   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.213724   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.213735   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:59:52.213749   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.213759   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.213774   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.213786   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:59:52.213809   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.213822   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:59:52.213839   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.213856   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.213874   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.213891   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.213907   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.213920   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:59:52.213942   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213963   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213985   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214006   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214028   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214055   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214078   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214099   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214122   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214144   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214166   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214190   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214211   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214242   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.214258   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:59:52.214283   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:59:52.214298   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:59:52.214323   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:59:52.214341   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:59:52.214365   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:59:52.214380   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:59:52.214405   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:59:52.214421   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:59:52.214447   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:59:52.214464   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:59:52.214489   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:59:52.214506   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:59:52.214531   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:59:52.214553   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:59:52.214576   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:59:52.214600   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:59:52.214623   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:59:52.214646   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:59:52.214668   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:59:52.214690   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:59:52.214703   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:59:52.214721   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.214735   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.214748   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.214762   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.214775   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.214788   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.215123   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.215148   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.215180   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:59:52.215194   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.215210   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:59:52.215222   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.215233   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:59:52.215247   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.215265   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.215283   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.215299   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.215312   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:59:52.215324   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.215340   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.215355   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.215372   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.215389   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.215401   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.215409   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.215430   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215454   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215478   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215500   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215517   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215532   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215549   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:59:52.215565   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215578   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215593   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215606   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215622   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215643   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215667   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215688   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215712   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215738   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215762   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215839   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:59:52.215868   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215888   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:59:52.215912   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:59:52.215937   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215959   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215979   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:59:52.216007   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216027   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216051   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216067   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216084   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216097   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216112   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 dockerd[4014]: time="2025-12-29T06:56:32.448119389Z" level=info msg="ignoring event" container=0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216128   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216141   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216157   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216171   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216195   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 dockerd[4014]: time="2025-12-29T06:57:22.465508622Z" level=info msg="ignoring event" container=b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216222   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216243   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216263   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216276   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216289   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 dockerd[4014]: time="2025-12-29T06:58:43.458641345Z" level=info msg="ignoring event" container=07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216304   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.011072219Z" level=info msg="ignoring event" container=173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216318   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.102126666Z" level=info msg="ignoring event" container=6b7711ee25a2df71f8c7d296f7186875ebd6ab978a71d33f177de0cc3055645b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216331   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.266578298Z" level=info msg="ignoring event" container=a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216346   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.365376654Z" level=info msg="ignoring event" container=fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216365   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.452640794Z" level=info msg="ignoring event" container=4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216380   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.557330204Z" level=info msg="ignoring event" container=d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216392   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.666151542Z" level=info msg="ignoring event" container=0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216409   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.751481082Z" level=info msg="ignoring event" container=f48fc04e347519b276e239ee9a6b0b8e093862313e46174a1815efae670eec9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216427   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535': Error response from daemon: No such container: 4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535"
	I1229 06:59:52.216440   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535'"
	I1229 06:59:52.216455   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216467   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216484   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be': Error response from daemon: No such container: bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be"
	I1229 06:59:52.216495   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be'"
	I1229 06:59:52.216512   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e': Error response from daemon: No such container: a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e"
	I1229 06:59:52.216525   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e'"
	I1229 06:59:52.216542   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974': Error response from daemon: No such container: d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:59:52.216554   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974'"
	I1229 06:59:52.216568   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00': Error response from daemon: No such container: 6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:59:52.216582   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	I1229 06:59:52.216596   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216611   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216628   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	I1229 06:59:52.216642   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	I1229 06:59:52.216660   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:59:52.216673   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	I1229 06:59:52.238629   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:59:52.238668   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:59:52.287732   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	W1229 06:59:52.290016   17440 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	W1229 06:59:52.290080   17440 out.go:285] * 
	W1229 06:59:52.290145   17440 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 06:59:52.290156   17440 out.go:285] * 
	W1229 06:59:52.290452   17440 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:59:52.293734   17440 out.go:203] 
	W1229 06:59:52.295449   17440 out.go:285] X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 06:59:52.295482   17440 out.go:285] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1229 06:59:52.295500   17440 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1229 06:59:52.296904   17440 out.go:203] 
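	The GUEST_PORT_IN_USE exit above is the root cause of the failed soft start: kubeadm's preflight finds the profile's apiserver port 8441 already bound, most likely by a kube-apiserver left over from the earlier start. A minimal way to confirm which process owns the port is to look inside the guest; the commands below are a diagnostic sketch (not captured output) and assume the usual ss/lsof/docker tooling in the minikube guest image. Note the printed suggestion's `lsof -p<port>` form filters by PID; filtering by port is `lsof -i :<port>`.

	        # open a shell in the guest for this profile
	        minikube ssh -p functional-695625
	        # inside the guest: which process is listening on the apiserver port?
	        sudo ss -ltnp | grep 8441        # or: sudo lsof -i :8441
	        # a stale kube-apiserver container from the previous start would show up here
	        sudo docker ps --filter name=kube-apiserver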
	
	
	==> Docker <==
	Dec 29 07:03:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:03:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	Dec 29 07:03:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:03:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	Dec 29 07:03:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:03:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	Dec 29 07:03:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:03:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	Dec 29 07:03:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:03:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	Dec 29 07:03:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:03:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	Dec 29 07:03:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:03:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="error getting RW layer size for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be': Error response from daemon: No such container: bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be'"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="error getting RW layer size for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535': Error response from daemon: No such container: 4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535'"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="error getting RW layer size for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e': Error response from daemon: No such container: a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e'"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="error getting RW layer size for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974': Error response from daemon: No such container: d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974'"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="error getting RW layer size for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00': Error response from daemon: No such container: 6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	Dec 29 07:04:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:04:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
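	With kubectl timing out like this, the apiserver can also be probed directly at the node's advertised address (192.168.39.121:8441, per the kubelet log further down). A sketch, assuming the default RBAC bindings that expose the health endpoints to anonymous clients:

	        # health endpoints served by kube-apiserver itself; -k skips CA verification
	        curl -k "https://192.168.39.121:8441/healthz"
	        curl -k "https://192.168.39.121:8441/readyz?verbose"

	If these hang or the connection is refused, the apiserver process itself is down rather than merely slow.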
	
	
	==> dmesg <==
	[  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> kernel <==
	 07:06:21 up 14 min,  0 users,  load average: 0.04, 0.12, 0.13
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.185100    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184709    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184771    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.308010    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.185947    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.186016    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.186033    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503148    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503225    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.503241    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	

                                                
                                                
-- /stdout --
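The kubelet excerpt above shows kube-apiserver in CrashLoopBackOff, so the useful logs live in the container runtime rather than behind the API. A sketch for reading them straight from Docker/cri-dockerd inside the guest; the k8s_ container-name prefix follows the cri-dockerd naming visible in the log, and <container-id> is a placeholder to fill in from the first command's output:

        minikube ssh -p functional-695625
        # list the apiserver containers, including exited attempts
        sudo docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'
        sudo docker logs --tail 50 <container-id>
        # the same view through the CRI shim
        sudo crictl ps -a --name kube-apiserver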
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.796002881s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (153.52s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (153.38s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-695625 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-695625 get pods: exit status 1 (1m0.117498461s)

                                                
                                                
** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-695625 get pods": exit status 1
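The direct kubectl run above waits out the full one-minute server-side timeout before failing. When reproducing this locally, a bounded and verbose retry makes it clearer whether the stall is in the apiserver or in context resolution; a sketch using standard kubectl flags:

        # cap the wait and log the HTTP round-trips (-v=6 prints each request/response)
        kubectl --context functional-695625 get pods --request-timeout=15s -v=6
        # cheap liveness probe against the same endpoint
        kubectl --context functional-695625 get --raw /livez --request-timeout=10s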
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.743966821s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
E1229 07:08:43.100411   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m1.094508009s)
helpers_test.go:261: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                       ARGS                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-039815 --log_dir /tmp/nospam-039815 pause                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:52 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                   │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ delete  │ -p nospam-039815                                                                  │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ start   │ -p functional-695625 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:53 UTC │
	│ start   │ -p functional-695625 --alsologtostderr -v=8                                       │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:53 UTC │                     │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:3.1                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:03 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:3.3                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:latest                          │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add minikube-local-cache-test:functional-695625           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache delete minikube-local-cache-test:functional-695625        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                  │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ list                                                                              │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl images                                          │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo docker rmi registry.k8s.io/pause:latest                │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │                     │
	│ cache   │ functional-695625 cache reload                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                  │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                               │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ kubectl │ functional-695625 kubectl -- --context functional-695625 get pods                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:53:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:53:22.250786   17440 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:53:22.251073   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:53:22.251082   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:53:22.251087   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:53:22.251322   17440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 06:53:22.251807   17440 out.go:368] Setting JSON to false
	I1229 06:53:22.252599   17440 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2152,"bootTime":1766989050,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:53:22.252669   17440 start.go:143] virtualization: kvm guest
	I1229 06:53:22.254996   17440 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:53:22.256543   17440 notify.go:221] Checking for updates...
	I1229 06:53:22.256551   17440 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:53:22.258115   17440 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:53:22.259464   17440 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:53:22.260823   17440 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 06:53:22.262461   17440 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 06:53:22.263830   17440 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:53:22.265499   17440 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:53:22.265604   17440 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:53:22.301877   17440 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 06:53:22.303062   17440 start.go:309] selected driver: kvm2
	I1229 06:53:22.303099   17440 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:22.303255   17440 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:53:22.304469   17440 cni.go:84] Creating CNI manager for ""
	I1229 06:53:22.304541   17440 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:53:22.304607   17440 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:22.304716   17440 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 06:53:22.306617   17440 out.go:179] * Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	I1229 06:53:22.307989   17440 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 06:53:22.308028   17440 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 06:53:22.308037   17440 cache.go:65] Caching tarball of preloaded images
	I1229 06:53:22.308172   17440 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 06:53:22.308185   17440 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 06:53:22.308288   17440 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/config.json ...
	I1229 06:53:22.308499   17440 start.go:360] acquireMachinesLock for functional-695625: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 06:53:22.308543   17440 start.go:364] duration metric: took 25.28µs to acquireMachinesLock for "functional-695625"
	I1229 06:53:22.308555   17440 start.go:96] Skipping create...Using existing machine configuration
	I1229 06:53:22.308560   17440 fix.go:54] fixHost starting: 
	I1229 06:53:22.310738   17440 fix.go:112] recreateIfNeeded on functional-695625: state=Running err=<nil>
	W1229 06:53:22.310765   17440 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 06:53:22.313927   17440 out.go:252] * Updating the running kvm2 "functional-695625" VM ...
	I1229 06:53:22.313960   17440 machine.go:94] provisionDockerMachine start ...
	I1229 06:53:22.317184   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.317690   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.317748   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.317941   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.318146   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.318156   17440 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 06:53:22.424049   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 06:53:22.424102   17440 buildroot.go:166] provisioning hostname "functional-695625"
	I1229 06:53:22.427148   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.427685   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.427715   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.427957   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.428261   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.428280   17440 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-695625 && echo "functional-695625" | sudo tee /etc/hostname
	I1229 06:53:22.552563   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 06:53:22.555422   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.555807   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.555834   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.556061   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.556278   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.556302   17440 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-695625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-695625/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-695625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 06:53:22.661438   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 06:53:22.661470   17440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 06:53:22.661505   17440 buildroot.go:174] setting up certificates
	I1229 06:53:22.661529   17440 provision.go:84] configureAuth start
	I1229 06:53:22.664985   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.665439   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.665459   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.667758   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.668124   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.668145   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.668257   17440 provision.go:143] copyHostCerts
	I1229 06:53:22.668280   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 06:53:22.668308   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 06:53:22.668317   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 06:53:22.668383   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 06:53:22.668476   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 06:53:22.668505   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 06:53:22.668512   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 06:53:22.668541   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 06:53:22.668582   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 06:53:22.668598   17440 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 06:53:22.668603   17440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 06:53:22.668632   17440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 06:53:22.668676   17440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.functional-695625 san=[127.0.0.1 192.168.39.121 functional-695625 localhost minikube]
	I1229 06:53:22.746489   17440 provision.go:177] copyRemoteCerts
	I1229 06:53:22.746545   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 06:53:22.749128   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.749596   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.749616   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.749757   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:22.836885   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 06:53:22.836959   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 06:53:22.872390   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 06:53:22.872481   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 06:53:22.908829   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 06:53:22.908896   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 06:53:22.941014   17440 provision.go:87] duration metric: took 279.457536ms to configureAuth
	I1229 06:53:22.941053   17440 buildroot.go:189] setting minikube options for container-runtime
	I1229 06:53:22.941277   17440 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:53:22.944375   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.944857   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:22.944916   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:22.945128   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:22.945387   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:22.945402   17440 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 06:53:23.052106   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 06:53:23.052136   17440 buildroot.go:70] root file system type: tmpfs
	I1229 06:53:23.052304   17440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 06:53:23.055887   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.056416   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.056446   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.056629   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.056893   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.056961   17440 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 06:53:23.183096   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 06:53:23.186465   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.186943   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.187006   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.187227   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.187475   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.187494   17440 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 06:53:23.306011   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 06:53:23.306077   17440 machine.go:97] duration metric: took 992.109676ms to provisionDockerMachine
	I1229 06:53:23.306099   17440 start.go:293] postStartSetup for "functional-695625" (driver="kvm2")
	I1229 06:53:23.306114   17440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 06:53:23.306201   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 06:53:23.309537   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.309944   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.309967   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.310122   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.393657   17440 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 06:53:23.398689   17440 command_runner.go:130] > NAME=Buildroot
	I1229 06:53:23.398723   17440 command_runner.go:130] > VERSION=2025.02
	I1229 06:53:23.398731   17440 command_runner.go:130] > ID=buildroot
	I1229 06:53:23.398737   17440 command_runner.go:130] > VERSION_ID=2025.02
	I1229 06:53:23.398745   17440 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1229 06:53:23.398791   17440 info.go:137] Remote host: Buildroot 2025.02
	I1229 06:53:23.398821   17440 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 06:53:23.398897   17440 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 06:53:23.398981   17440 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 06:53:23.398993   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /etc/ssl/certs/134862.pem
	I1229 06:53:23.399068   17440 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> hosts in /etc/test/nested/copy/13486
	I1229 06:53:23.399075   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> /etc/test/nested/copy/13486/hosts
	I1229 06:53:23.399114   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13486
	I1229 06:53:23.412045   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 06:53:23.445238   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts --> /etc/test/nested/copy/13486/hosts (40 bytes)
	I1229 06:53:23.479048   17440 start.go:296] duration metric: took 172.930561ms for postStartSetup
	I1229 06:53:23.479099   17440 fix.go:56] duration metric: took 1.170538464s for fixHost
	I1229 06:53:23.482307   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.482761   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.482808   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.483049   17440 main.go:144] libmachine: Using SSH client type: native
	I1229 06:53:23.483313   17440 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 06:53:23.483327   17440 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 06:53:23.586553   17440 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766991203.580410695
	
	I1229 06:53:23.586572   17440 fix.go:216] guest clock: 1766991203.580410695
	I1229 06:53:23.586579   17440 fix.go:229] Guest: 2025-12-29 06:53:23.580410695 +0000 UTC Remote: 2025-12-29 06:53:23.479103806 +0000 UTC m=+1.278853461 (delta=101.306889ms)
	I1229 06:53:23.586594   17440 fix.go:200] guest clock delta is within tolerance: 101.306889ms
	I1229 06:53:23.586598   17440 start.go:83] releasing machines lock for "functional-695625", held for 1.278049275s
	I1229 06:53:23.590004   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.590438   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.590463   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.591074   17440 ssh_runner.go:195] Run: cat /version.json
	I1229 06:53:23.591186   17440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 06:53:23.594362   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594454   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594831   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.594868   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.594954   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:23.595021   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:23.595083   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.595278   17440 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 06:53:23.692873   17440 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1229 06:53:23.692948   17440 command_runner.go:130] > {"iso_version": "v1.37.0-1766979747-22353", "kicbase_version": "v0.0.48-1766884053-22351", "minikube_version": "v1.37.0", "commit": "f5189b2bdbb6990e595e25e06a017f8901d29fa8"}
	I1229 06:53:23.693063   17440 ssh_runner.go:195] Run: systemctl --version
	I1229 06:53:23.700357   17440 command_runner.go:130] > systemd 256 (256.7)
	I1229 06:53:23.700393   17440 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1229 06:53:23.700501   17440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1229 06:53:23.707230   17440 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1229 06:53:23.707369   17440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 06:53:23.707433   17440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 06:53:23.719189   17440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 06:53:23.719220   17440 start.go:496] detecting cgroup driver to use...
	I1229 06:53:23.719246   17440 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 06:53:23.719351   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:53:23.744860   17440 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1229 06:53:23.744940   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 06:53:23.758548   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 06:53:23.773051   17440 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 06:53:23.773122   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 06:53:23.786753   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 06:53:23.800393   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 06:53:23.813395   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 06:53:23.826600   17440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 06:53:23.840992   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 06:53:23.854488   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 06:53:23.869084   17440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 06:53:23.882690   17440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 06:53:23.894430   17440 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1229 06:53:23.894542   17440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 06:53:23.912444   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:24.139583   17440 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 06:53:24.191402   17440 start.go:496] detecting cgroup driver to use...
	I1229 06:53:24.191457   17440 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 06:53:24.191521   17440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 06:53:24.217581   17440 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1229 06:53:24.217604   17440 command_runner.go:130] > [Unit]
	I1229 06:53:24.217609   17440 command_runner.go:130] > Description=Docker Application Container Engine
	I1229 06:53:24.217615   17440 command_runner.go:130] > Documentation=https://docs.docker.com
	I1229 06:53:24.217626   17440 command_runner.go:130] > After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1229 06:53:24.217631   17440 command_runner.go:130] > Wants=network-online.target containerd.service
	I1229 06:53:24.217635   17440 command_runner.go:130] > Requires=docker.socket
	I1229 06:53:24.217638   17440 command_runner.go:130] > StartLimitBurst=3
	I1229 06:53:24.217642   17440 command_runner.go:130] > StartLimitIntervalSec=60
	I1229 06:53:24.217646   17440 command_runner.go:130] > [Service]
	I1229 06:53:24.217649   17440 command_runner.go:130] > Type=notify
	I1229 06:53:24.217653   17440 command_runner.go:130] > Restart=always
	I1229 06:53:24.217660   17440 command_runner.go:130] > ExecStart=
	I1229 06:53:24.217694   17440 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1229 06:53:24.217710   17440 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1229 06:53:24.217748   17440 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1229 06:53:24.217761   17440 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1229 06:53:24.217767   17440 command_runner.go:130] > LimitNOFILE=infinity
	I1229 06:53:24.217782   17440 command_runner.go:130] > LimitNPROC=infinity
	I1229 06:53:24.217790   17440 command_runner.go:130] > LimitCORE=infinity
	I1229 06:53:24.217818   17440 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1229 06:53:24.217828   17440 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1229 06:53:24.217833   17440 command_runner.go:130] > TasksMax=infinity
	I1229 06:53:24.217840   17440 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1229 06:53:24.217847   17440 command_runner.go:130] > Delegate=yes
	I1229 06:53:24.217855   17440 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1229 06:53:24.217864   17440 command_runner.go:130] > KillMode=process
	I1229 06:53:24.217871   17440 command_runner.go:130] > OOMScoreAdjust=-500
	I1229 06:53:24.217881   17440 command_runner.go:130] > [Install]
	I1229 06:53:24.217896   17440 command_runner.go:130] > WantedBy=multi-user.target
	I1229 06:53:24.217973   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:53:24.255457   17440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 06:53:24.293449   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:53:24.313141   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 06:53:24.332090   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:53:24.359168   17440 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1229 06:53:24.359453   17440 ssh_runner.go:195] Run: which cri-dockerd
	I1229 06:53:24.364136   17440 command_runner.go:130] > /usr/bin/cri-dockerd
	I1229 06:53:24.364255   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 06:53:24.377342   17440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 06:53:24.400807   17440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 06:53:24.632265   17440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 06:53:24.860401   17440 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 06:53:24.860544   17440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 06:53:24.885002   17440 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 06:53:24.902479   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:25.138419   17440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 06:53:48.075078   17440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (22.936617903s)
	I1229 06:53:48.075181   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 06:53:48.109404   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 06:53:48.160259   17440 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 06:53:48.213352   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 06:53:48.231311   17440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 06:53:48.408709   17440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 06:53:48.584722   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:48.754219   17440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 06:53:48.798068   17440 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 06:53:48.815248   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:48.983637   17440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 06:53:49.117354   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 06:53:49.139900   17440 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 06:53:49.139985   17440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 06:53:49.146868   17440 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1229 06:53:49.146900   17440 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1229 06:53:49.146910   17440 command_runner.go:130] > Device: 0,23	Inode: 2092        Links: 1
	I1229 06:53:49.146918   17440 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1229 06:53:49.146926   17440 command_runner.go:130] > Access: 2025-12-29 06:53:49.121969518 +0000
	I1229 06:53:49.146933   17440 command_runner.go:130] > Modify: 2025-12-29 06:53:48.995956445 +0000
	I1229 06:53:49.146940   17440 command_runner.go:130] > Change: 2025-12-29 06:53:49.012958222 +0000
	I1229 06:53:49.146947   17440 command_runner.go:130] >  Birth: 2025-12-29 06:53:48.995956445 +0000
	I1229 06:53:49.146986   17440 start.go:574] Will wait 60s for crictl version
	I1229 06:53:49.147040   17440 ssh_runner.go:195] Run: which crictl
	I1229 06:53:49.152717   17440 command_runner.go:130] > /usr/bin/crictl
	I1229 06:53:49.152823   17440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 06:53:49.184154   17440 command_runner.go:130] > Version:  0.1.0
	I1229 06:53:49.184179   17440 command_runner.go:130] > RuntimeName:  docker
	I1229 06:53:49.184183   17440 command_runner.go:130] > RuntimeVersion:  28.5.2
	I1229 06:53:49.184188   17440 command_runner.go:130] > RuntimeApiVersion:  v1
	I1229 06:53:49.184211   17440 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 06:53:49.184266   17440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 06:53:49.212414   17440 command_runner.go:130] > 28.5.2
	I1229 06:53:49.213969   17440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 06:53:49.257526   17440 command_runner.go:130] > 28.5.2
	I1229 06:53:49.262261   17440 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 06:53:49.266577   17440 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:49.267255   17440 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 06:53:49.267298   17440 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 06:53:49.267633   17440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 06:53:49.286547   17440 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1229 06:53:49.286686   17440 kubeadm.go:884] updating cluster {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 06:53:49.286896   17440 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 06:53:49.286965   17440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 06:53:49.324994   17440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0
	I1229 06:53:49.325029   17440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 06:53:49.325037   17440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0
	I1229 06:53:49.325045   17440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0
	I1229 06:53:49.325052   17440 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1229 06:53:49.325060   17440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1229 06:53:49.325067   17440 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1229 06:53:49.325074   17440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 06:53:49.325113   17440 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 06:53:49.325127   17440 docker.go:624] Images already preloaded, skipping extraction
	I1229 06:53:49.325191   17440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 06:53:49.352256   17440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0
	I1229 06:53:49.352294   17440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0
	I1229 06:53:49.352301   17440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0
	I1229 06:53:49.352309   17440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 06:53:49.352315   17440 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1229 06:53:49.352323   17440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1229 06:53:49.352349   17440 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1229 06:53:49.352361   17440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 06:53:49.352398   17440 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 06:53:49.352412   17440 cache_images.go:86] Images are preloaded, skipping loading
	I1229 06:53:49.352427   17440 kubeadm.go:935] updating node { 192.168.39.121 8441 v1.35.0 docker true true} ...
	I1229 06:53:49.352542   17440 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-695625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 06:53:49.352611   17440 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 06:53:49.466471   17440 command_runner.go:130] > systemd
	I1229 06:53:49.469039   17440 cni.go:84] Creating CNI manager for ""
	I1229 06:53:49.469084   17440 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:53:49.469108   17440 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 06:53:49.469137   17440 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-695625 NodeName:functional-695625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 06:53:49.469275   17440 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-695625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 06:53:49.469338   17440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 06:53:49.495545   17440 command_runner.go:130] > kubeadm
	I1229 06:53:49.495573   17440 command_runner.go:130] > kubectl
	I1229 06:53:49.495580   17440 command_runner.go:130] > kubelet
	I1229 06:53:49.495602   17440 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 06:53:49.495647   17440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 06:53:49.521658   17440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1229 06:53:49.572562   17440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 06:53:49.658210   17440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1229 06:53:49.740756   17440 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I1229 06:53:49.746333   17440 command_runner.go:130] > 192.168.39.121	control-plane.minikube.internal
	I1229 06:53:49.746402   17440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:53:50.073543   17440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 06:53:50.148789   17440 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625 for IP: 192.168.39.121
	I1229 06:53:50.148837   17440 certs.go:195] generating shared ca certs ...
	I1229 06:53:50.148860   17440 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:53:50.149082   17440 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 06:53:50.149152   17440 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 06:53:50.149169   17440 certs.go:257] generating profile certs ...
	I1229 06:53:50.149320   17440 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key
	I1229 06:53:50.149413   17440 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key.a4651613
	I1229 06:53:50.149478   17440 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key
	I1229 06:53:50.149490   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 06:53:50.149508   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 06:53:50.149525   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 06:53:50.149541   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 06:53:50.149556   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 06:53:50.149573   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 06:53:50.149588   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 06:53:50.149607   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 06:53:50.149673   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 06:53:50.149723   17440 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 06:53:50.149738   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 06:53:50.149776   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 06:53:50.149837   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 06:53:50.149873   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 06:53:50.149950   17440 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 06:53:50.150003   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:50.150023   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem -> /usr/share/ca-certificates/13486.pem
	I1229 06:53:50.150038   17440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /usr/share/ca-certificates/134862.pem
	I1229 06:53:50.150853   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 06:53:50.233999   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 06:53:50.308624   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 06:53:50.436538   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 06:53:50.523708   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 06:53:50.633239   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 06:53:50.746852   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 06:53:50.793885   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 06:53:50.894956   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 06:53:50.955149   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 06:53:51.018694   17440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 06:53:51.084938   17440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 06:53:51.127238   17440 ssh_runner.go:195] Run: openssl version
	I1229 06:53:51.136812   17440 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1229 06:53:51.136914   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.154297   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 06:53:51.175503   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182560   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182600   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.182653   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:53:51.195355   17440 command_runner.go:130] > b5213941
	I1229 06:53:51.195435   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 06:53:51.217334   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.233542   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 06:53:51.248778   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255758   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255826   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.255874   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 06:53:51.272983   17440 command_runner.go:130] > 51391683
	I1229 06:53:51.273077   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 06:53:51.303911   17440 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.325828   17440 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 06:53:51.347788   17440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360429   17440 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360567   17440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.360625   17440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 06:53:51.369235   17440 command_runner.go:130] > 3ec20f2e
	I1229 06:53:51.369334   17440 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
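	The sequence above installs each CA file by asking openssl for its subject hash and symlinking <hash>.0 under /etc/ssl/certs to it. A minimal sketch of that same hash-then-link pattern, shelling out to openssl the way the log does (paths are illustrative; on the test VM these commands actually run over SSH with sudo):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash mirrors the "openssl x509 -hash -noout" + "ln -fs" steps from the log:
	// it computes the subject hash of certPath and creates certsDir/<hash>.0 pointing at it.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // emulate ln -f: replace an existing link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
	}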
	I1229 06:53:51.381517   17440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:53:51.387517   17440 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:53:51.387548   17440 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1229 06:53:51.387554   17440 command_runner.go:130] > Device: 253,1	Inode: 1052441     Links: 1
	I1229 06:53:51.387560   17440 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1229 06:53:51.387568   17440 command_runner.go:130] > Access: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387572   17440 command_runner.go:130] > Modify: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387577   17440 command_runner.go:130] > Change: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387581   17440 command_runner.go:130] >  Birth: 2025-12-29 06:52:32.673454347 +0000
	I1229 06:53:51.387657   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 06:53:51.396600   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.397131   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 06:53:51.410180   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.410283   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 06:53:51.419062   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.419164   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 06:53:51.431147   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.431222   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 06:53:51.441881   17440 command_runner.go:130] > Certificate will not expire
	I1229 06:53:51.442104   17440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 06:53:51.450219   17440 command_runner.go:130] > Certificate will not expire
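	Each "-checkend 86400" call above asks whether a certificate expires within the next 24 hours. The same check expressed in plain Go with crypto/x509 (a sketch; the path is one of the files checked in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file expires
	// within the given window, matching what `openssl x509 -checkend` tests.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}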
	I1229 06:53:51.450295   17440 kubeadm.go:401] StartCluster: {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:53:51.450396   17440 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 06:53:51.474716   17440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 06:53:51.489086   17440 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1229 06:53:51.489107   17440 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1229 06:53:51.489113   17440 command_runner.go:130] > /var/lib/minikube/etcd:
	I1229 06:53:51.489117   17440 command_runner.go:130] > member
	I1229 06:53:51.489676   17440 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 06:53:51.489694   17440 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 06:53:51.489753   17440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 06:53:51.503388   17440 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:51.503948   17440 kubeconfig.go:125] found "functional-695625" server: "https://192.168.39.121:8441"
	I1229 06:53:51.504341   17440 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:53:51.504505   17440 kapi.go:59] client config for functional-695625: &rest.Config{Host:"https://192.168.39.121:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
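	The client config above is built directly from the profile's client cert/key and the cluster CA. For illustration, a minimal client-go sketch with the same shape, assuming k8s.io/client-go is on the module path; the host and file paths are the ones from the log, and listing nodes is only an example call:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{
			Host: "https://192.168.39.121:8441",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key",
				CAFile:   "/home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("nodes:", len(nodes.Items))
	}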
	I1229 06:53:51.504963   17440 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 06:53:51.504986   17440 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 06:53:51.504992   17440 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 06:53:51.504998   17440 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 06:53:51.505004   17440 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 06:53:51.505012   17440 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 06:53:51.505089   17440 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1229 06:53:51.505414   17440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 06:53:51.521999   17440 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.121
	I1229 06:53:51.522047   17440 kubeadm.go:1161] stopping kube-system containers ...
	I1229 06:53:51.522115   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 06:53:51.550376   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:53:51.550407   17440 command_runner.go:130] > a014f32abcd0
	I1229 06:53:51.550415   17440 command_runner.go:130] > d81259f64136
	I1229 06:53:51.550422   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:53:51.550432   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:53:51.550441   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:53:51.550448   17440 command_runner.go:130] > 4ed279733477
	I1229 06:53:51.550455   17440 command_runner.go:130] > 1fc5fa7d9295
	I1229 06:53:51.550462   17440 command_runner.go:130] > 98261fa185f6
	I1229 06:53:51.550470   17440 command_runner.go:130] > b046056ff071
	I1229 06:53:51.550478   17440 command_runner.go:130] > b3cc8048f6d9
	I1229 06:53:51.550485   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:53:51.550491   17440 command_runner.go:130] > 64853b50a6c5
	I1229 06:53:51.550496   17440 command_runner.go:130] > bd7d900efd48
	I1229 06:53:51.550505   17440 command_runner.go:130] > 8911777281f4
	I1229 06:53:51.550511   17440 command_runner.go:130] > a123d63a8edb
	I1229 06:53:51.550516   17440 command_runner.go:130] > 548561c7ada8
	I1229 06:53:51.550521   17440 command_runner.go:130] > fd22eb0d6c14
	I1229 06:53:51.550528   17440 command_runner.go:130] > 14aafc386533
	I1229 06:53:51.550540   17440 command_runner.go:130] > abbe46bd960e
	I1229 06:53:51.550548   17440 command_runner.go:130] > 4b032678478a
	I1229 06:53:51.550556   17440 command_runner.go:130] > 0af491ef7c2f
	I1229 06:53:51.550566   17440 command_runner.go:130] > 5024b03252e3
	I1229 06:53:51.550572   17440 command_runner.go:130] > fe7b5da2f7fb
	I1229 06:53:51.550582   17440 command_runner.go:130] > ad82b94f7629
	I1229 06:53:51.552420   17440 docker.go:487] Stopping containers: [6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629]
	I1229 06:53:51.552499   17440 ssh_runner.go:195] Run: docker stop 6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629
	I1229 06:53:51.976888   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:53:51.976911   17440 command_runner.go:130] > a014f32abcd0
	I1229 06:53:58.789216   17440 command_runner.go:130] > d81259f64136
	I1229 06:53:58.789240   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:53:58.789248   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:53:58.789252   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:53:58.789256   17440 command_runner.go:130] > 4ed279733477
	I1229 06:53:58.789259   17440 command_runner.go:130] > 1fc5fa7d9295
	I1229 06:53:58.789262   17440 command_runner.go:130] > 98261fa185f6
	I1229 06:53:58.789266   17440 command_runner.go:130] > b046056ff071
	I1229 06:53:58.789269   17440 command_runner.go:130] > b3cc8048f6d9
	I1229 06:53:58.789272   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:53:58.789275   17440 command_runner.go:130] > 64853b50a6c5
	I1229 06:53:58.789278   17440 command_runner.go:130] > bd7d900efd48
	I1229 06:53:58.789281   17440 command_runner.go:130] > 8911777281f4
	I1229 06:53:58.789284   17440 command_runner.go:130] > a123d63a8edb
	I1229 06:53:58.789287   17440 command_runner.go:130] > 548561c7ada8
	I1229 06:53:58.789295   17440 command_runner.go:130] > fd22eb0d6c14
	I1229 06:53:58.789299   17440 command_runner.go:130] > 14aafc386533
	I1229 06:53:58.789303   17440 command_runner.go:130] > abbe46bd960e
	I1229 06:53:58.789306   17440 command_runner.go:130] > 4b032678478a
	I1229 06:53:58.789310   17440 command_runner.go:130] > 0af491ef7c2f
	I1229 06:53:58.789314   17440 command_runner.go:130] > 5024b03252e3
	I1229 06:53:58.789317   17440 command_runner.go:130] > fe7b5da2f7fb
	I1229 06:53:58.789321   17440 command_runner.go:130] > ad82b94f7629
	I1229 06:53:58.790986   17440 ssh_runner.go:235] Completed: docker stop 6f69ba6a1553 a014f32abcd0 d81259f64136 fb6db97d8ffe 17fe16a2822a a79d99ad3fde 4ed279733477 1fc5fa7d9295 98261fa185f6 b046056ff071 b3cc8048f6d9 bd96b57aa9fc 64853b50a6c5 bd7d900efd48 8911777281f4 a123d63a8edb 548561c7ada8 fd22eb0d6c14 14aafc386533 abbe46bd960e 4b032678478a 0af491ef7c2f 5024b03252e3 fe7b5da2f7fb ad82b94f7629: (7.238443049s)
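	The restart path above stops every kube-system container in two steps: list matching container IDs with a name filter, then pass them all to a single docker stop (about 7.2s here). A sketch of that list-then-stop pattern via os/exec, using the same filter string as the log; the real code runs these commands over SSH on the VM:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// List container IDs for kube-system pods, matching the filter used in the log.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return // nothing to stop
		}
		// Stop them all with a single `docker stop` invocation.
		args := append([]string{"stop"}, ids...)
		if err := exec.Command("docker", args...).Run(); err != nil {
			log.Fatal(err)
		}
	}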
	I1229 06:53:58.791057   17440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 06:53:58.833953   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:53:58.857522   17440 command_runner.go:130] > -rw------- 1 root root 5635 Dec 29 06:52 /etc/kubernetes/admin.conf
	I1229 06:53:58.857550   17440 command_runner.go:130] > -rw------- 1 root root 5638 Dec 29 06:52 /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.857561   17440 command_runner.go:130] > -rw------- 1 root root 1974 Dec 29 06:52 /etc/kubernetes/kubelet.conf
	I1229 06:53:58.857571   17440 command_runner.go:130] > -rw------- 1 root root 5590 Dec 29 06:52 /etc/kubernetes/scheduler.conf
	I1229 06:53:58.857610   17440 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 29 06:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Dec 29 06:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1974 Dec 29 06:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Dec 29 06:52 /etc/kubernetes/scheduler.conf
	
	I1229 06:53:58.857671   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:53:58.875294   17440 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I1229 06:53:58.876565   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:53:58.896533   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.896617   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:53:58.917540   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.936703   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.936777   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:53:58.957032   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:53:58.970678   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 06:53:58.970742   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:53:58.992773   17440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:53:59.007767   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.061402   17440 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 06:53:59.061485   17440 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1229 06:53:59.061525   17440 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1229 06:53:59.061923   17440 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 06:53:59.062217   17440 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1229 06:53:59.062329   17440 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1229 06:53:59.062606   17440 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1229 06:53:59.062852   17440 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1229 06:53:59.062948   17440 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1229 06:53:59.063179   17440 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 06:53:59.063370   17440 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 06:53:59.063615   17440 command_runner.go:130] > [certs] Using the existing "sa" key
	I1229 06:53:59.066703   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.686012   17440 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 06:53:59.686050   17440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1229 06:53:59.686059   17440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I1229 06:53:59.686069   17440 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 06:53:59.686078   17440 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 06:53:59.686087   17440 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 06:53:59.686203   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:53:59.995495   17440 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 06:53:59.995529   17440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 06:53:59.995539   17440 command_runner.go:130] > [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 06:53:59.995545   17440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 06:53:59.995549   17440 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1229 06:53:59.995615   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:54:00.047957   17440 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 06:54:00.047983   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 06:54:00.053966   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 06:54:00.056537   17440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 06:54:00.059558   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1229 06:54:00.175745   17440 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
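	Rather than a full `kubeadm init`, the restart replays individual init phases (certs all, kubeconfig all, kubelet-start, control-plane all, etcd local) against the generated config, with the version-pinned binary directory prepended to PATH. A rough sketch of that loop; the actual commands in the log run through sudo and /bin/bash -c on the VM:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	const binDir = "/var/lib/minikube/binaries/v1.35.0"

	// runKubeadmPhase invokes one `kubeadm init phase ...` subcommand against the staged config.
	func runKubeadmPhase(phase ...string) error {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command(binDir+"/kubeadm", args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// Prepend the pinned binaries, mirroring the `env PATH=...` wrapper in the log.
		os.Setenv("PATH", binDir+":"+os.Getenv("PATH"))
		for _, phase := range [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		} {
			if err := runKubeadmPhase(phase...); err != nil {
				log.Fatal(err)
			}
		}
	}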
	I1229 06:54:00.175825   17440 api_server.go:52] waiting for apiserver process to appear ...
	I1229 06:54:00.175893   17440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 06:54:00.233895   17440 command_runner.go:130] > 2416
	I1229 06:54:00.233940   17440 api_server.go:72] duration metric: took 58.126409ms to wait for apiserver process to appear ...
	I1229 06:54:00.233953   17440 api_server.go:88] waiting for apiserver healthz status ...
	I1229 06:54:00.233976   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:05.236821   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:05.236865   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:10.239922   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:10.239956   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:15.242312   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:15.242347   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:20.245667   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:20.245726   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:25.248449   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:25.248501   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:30.249241   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:30.249279   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:35.251737   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:35.251771   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:40.254366   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:40.254407   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:45.257232   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:45.257275   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:50.259644   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:50.259685   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:54:55.261558   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:54:55.261592   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:55:00.263123   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
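	The healthz probe above retries https://192.168.39.121:8441/healthz roughly every 5 seconds, each attempt failing with a client-side timeout, until it gives up and falls back to collecting container logs. A minimal sketch of such a poll loop; the CA path is the one from the log, and the overall 4-minute budget is illustrative only:

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			fmt.Println("read CA:", err)
			return
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		client := &http.Client{
			Timeout: 5 * time.Second, // mirrors the per-attempt deadline seen in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{RootCAs: pool},
			},
		}

		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.121:8441/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body))
					return
				}
			}
			time.Sleep(time.Second) // brief pause before the next attempt
		}
		fmt.Println("apiserver never became healthy")
	}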
	I1229 06:55:00.263241   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:55:00.287429   17440 command_runner.go:130] > fb6db97d8ffe
	I1229 06:55:00.288145   17440 logs.go:282] 1 containers: [fb6db97d8ffe]
	I1229 06:55:00.288289   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:55:00.310519   17440 command_runner.go:130] > d81259f64136
	I1229 06:55:00.310561   17440 logs.go:282] 1 containers: [d81259f64136]
	I1229 06:55:00.310630   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:55:00.334579   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:55:00.334624   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:55:00.334692   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:55:00.353472   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:55:00.353503   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:55:00.354626   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:55:00.354714   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:55:00.376699   17440 command_runner.go:130] > 8911777281f4
	I1229 06:55:00.378105   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:55:00.378188   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:55:00.397976   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:55:00.399617   17440 logs.go:282] 1 containers: [17fe16a2822a]
	I1229 06:55:00.399707   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:55:00.419591   17440 logs.go:282] 0 containers: []
	W1229 06:55:00.419617   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:55:00.419665   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:55:00.440784   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:55:00.441985   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:55:00.442020   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:55:00.442030   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:55:00.465151   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.465192   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:55:00.465226   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.465237   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:55:00.465255   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.465271   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:55:00.465285   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:55:00.465823   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:55:00.465845   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:55:00.487618   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:55:00.487646   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:55:00.508432   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.508468   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:55:00.508482   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:55:00.508508   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:55:00.508521   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:55:00.508529   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.508541   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:55:00.508551   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:55:00.508560   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:55:00.508568   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:55:00.510308   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:55:00.510337   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:55:00.531862   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:55:00.532900   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:55:00.532924   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:55:00.554051   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:55:00.554084   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:55:00.554095   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:55:00.554109   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:55:00.554131   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:55:00.554148   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:55:00.554170   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:55:00.554189   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:55:00.554195   17440 command_runner.go:130] !  >
	I1229 06:55:00.554208   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:55:00.554224   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:55:00.554250   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:55:00.554261   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:55:00.554273   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.554316   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:55:00.554327   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:55:00.554339   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:55:00.554350   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:55:00.554366   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:55:00.554381   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:55:00.554390   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:55:00.554402   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:55:00.554414   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:55:00.554427   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:55:00.554437   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:55:00.554452   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:55:00.556555   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:55:00.556578   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:55:00.581812   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:55:00.581848   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:55:00.581857   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:55:00.581865   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581874   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581881   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:55:00.581890   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:55:00.581911   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:55:00.581919   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581930   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581942   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:55:00.581949   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581957   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581964   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581975   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581985   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.581993   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582003   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582010   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582020   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582030   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582037   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582044   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582051   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582070   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582080   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582088   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582097   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582105   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582115   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582125   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582141   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582152   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582160   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582170   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582177   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582186   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582193   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582203   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582211   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582221   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582228   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582235   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582242   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582252   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582261   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582269   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582276   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582287   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582294   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582302   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582312   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582319   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582329   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582336   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582346   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582353   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582363   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582370   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582378   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.582385   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:55:00.586872   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:55:00.586916   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:55:00.609702   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.609731   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.609766   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.609784   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.609811   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.609822   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:55:00.609831   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:55:00.609842   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.609848   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.609857   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:55:00.609865   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.609879   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.609890   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.609906   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.609915   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.609923   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:55:00.609943   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.609954   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:55:00.609966   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.609976   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.609983   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:55:00.609990   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.609998   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610006   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610016   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610024   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610041   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610050   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610070   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610082   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.610091   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.610100   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.610107   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:55:00.610115   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.610123   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.610131   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.610141   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.610152   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.610159   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.610168   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610179   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:55:00.610191   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:55:00.610203   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.610216   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.610223   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.610231   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.610242   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:55:00.610251   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.610258   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.610265   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.610271   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.610281   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:55:00.610290   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.610303   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.610323   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.610335   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.610345   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.610355   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:55:00.610374   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.610384   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:55:00.610394   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.610404   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.610412   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610422   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610429   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610439   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610447   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610455   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610461   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610470   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:55:00.610476   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610483   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.610491   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.610500   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.610508   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.610516   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:55:00.610523   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.610531   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.610538   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.610550   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.610559   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.610567   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.610573   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.610579   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.610595   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.610607   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:55:00.610615   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.610622   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.610630   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.610637   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.610644   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:55:00.610653   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.610669   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.610680   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.610692   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.610705   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.610713   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:55:00.610735   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.610744   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:55:00.610755   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.610765   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.610772   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.610781   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.610789   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.610809   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.610818   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.610824   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.610853   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610867   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610881   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610896   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610909   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:55:00.610922   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610936   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610949   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610964   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.610979   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.610995   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611010   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.611021   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:55:00.611037   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:55:00.611048   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:55:00.611062   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:55:00.611070   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:55:00.611079   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:55:00.611087   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:55:00.611096   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:55:00.611102   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:55:00.611109   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:55:00.611118   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:55:00.611125   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:55:00.611135   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:55:00.611146   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:55:00.611157   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:55:00.611167   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:55:00.611179   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:55:00.611186   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:55:00.611199   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611213   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611226   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611241   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611266   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611281   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611295   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611310   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611325   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611342   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611355   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611370   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611382   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.611404   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:55:00.611417   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:55:00.611435   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:55:00.611449   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:55:00.611464   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:55:00.611476   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:55:00.611491   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:55:00.611502   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:55:00.611517   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:55:00.611529   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:55:00.611544   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:55:00.611558   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:55:00.611574   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:55:00.611586   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:55:00.611601   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:55:00.611617   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:55:00.611631   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:55:00.611645   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:55:00.611660   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:55:00.611674   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:55:00.611689   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:55:00.611702   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:55:00.611712   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:55:00.611722   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:55:00.611732   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:55:00.611740   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:55:00.611751   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:55:00.611759   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:55:00.611767   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:55:00.611835   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:55:00.611849   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:55:00.611867   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:55:00.611877   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.611888   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:55:00.611894   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.611901   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:55:00.611909   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:55:00.611917   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:55:00.611929   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:55:00.611937   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:55:00.611946   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:55:00.611954   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:55:00.611963   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:55:00.611971   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:55:00.611981   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:55:00.611990   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:55:00.611999   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:55:00.612006   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:55:00.612019   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612031   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612046   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612063   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612079   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612093   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612112   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:55:00.612128   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612142   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612157   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612171   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612185   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612201   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612217   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612230   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612245   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612259   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612274   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612293   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:55:00.612309   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612323   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:55:00.612338   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:55:00.612354   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:55:00.612366   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:55:00.612380   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:55:00.612394   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.612407   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
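	The Docker/cri-docker journal above was gathered with the ssh_runner command shown at the head of this block (journalctl -u docker -u cri-docker -n 400). A minimal sketch of re-running the same collection by hand, assuming the functional-695625 VM is still up and reachable via minikube ssh:
	    minikube -p functional-695625 ssh -- sudo journalctl -u docker -u cri-docker -n 400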
	I1229 06:55:00.629261   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:55:00.629293   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:55:00.671242   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:55:00.671279   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       About a minute ago   Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:55:00.671293   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:55:00.671303   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       About a minute ago   Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:55:00.671315   17440 command_runner.go:130] > fb6db97d8ffe4       5c6acd67e9cd1       About a minute ago   Exited              kube-apiserver            1                   4ed2797334771       kube-apiserver-functional-695625            kube-system
	I1229 06:55:00.671327   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       About a minute ago   Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:55:00.671337   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       About a minute ago   Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:55:00.671347   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:55:00.671362   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       2 minutes ago        Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:55:00.673604   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:55:00.673628   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:55:00.695836   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077121    2634 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:55:00.695863   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077418    2634 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:55:00.695877   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.077955    2634 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:55:00.695887   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.109084    2634 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:55:00.695901   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.135073    2634 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:55:00.695910   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.137245    2634 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:55:00.695920   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.137294    2634 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:55:00.695934   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.137340    2634 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:55:00.695942   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.209773    2634 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:55:00.695952   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.209976    2634 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:55:00.695962   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210050    2634 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:55:00.695975   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210361    2634 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:55:00.696001   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210374    2634 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:55:00.696011   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210392    2634 policy_none.go:50] "Start"
	I1229 06:55:00.696020   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210408    2634 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:55:00.696029   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210421    2634 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:55:00.696038   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210527    2634 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:55:00.696045   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.210534    2634 policy_none.go:44] "Start"
	I1229 06:55:00.696056   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.219245    2634 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:55:00.696067   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.220437    2634 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:55:00.696078   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.220456    2634 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:55:00.696089   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.221071    2634 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:55:00.696114   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.226221    2634 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:55:00.696126   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.239387    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696144   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.239974    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696155   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.240381    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696165   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.262510    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696185   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283041    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696208   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283087    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696228   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283118    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696247   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283135    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696268   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283151    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696288   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283163    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696309   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283175    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696329   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283189    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696357   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283209    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696378   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283223    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696400   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.283249    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696416   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.285713    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-functional-695625\" already exists" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696428   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.290012    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-functional-695625\" already exists" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696442   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.290269    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-functional-695625\" already exists" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696454   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: E1229 06:52:41.304300    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-functional-695625\" already exists" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696466   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.336817    2634 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.696475   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.351321    2634 kubelet_node_status.go:123] "Node was previously registered" node="functional-695625"
	I1229 06:55:00.696486   17440 command_runner.go:130] > Dec 29 06:52:41 functional-695625 kubelet[2634]: I1229 06:52:41.351415    2634 kubelet_node_status.go:77] "Successfully registered node" node="functional-695625"
	I1229 06:55:00.696493   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.033797    2634 apiserver.go:52] "Watching apiserver"
	I1229 06:55:00.696503   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.077546    2634 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1229 06:55:00.696527   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.181689    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-functional-695625" podStartSLOduration=3.181660018 podStartE2EDuration="3.181660018s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.180947341 +0000 UTC m=+1.223544146" watchObservedRunningTime="2025-12-29 06:52:42.181660018 +0000 UTC m=+1.224256834"
	I1229 06:55:00.696555   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.221952    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-functional-695625" podStartSLOduration=3.221936027 podStartE2EDuration="3.221936027s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.202120755 +0000 UTC m=+1.244717560" watchObservedRunningTime="2025-12-29 06:52:42.221936027 +0000 UTC m=+1.264532905"
	I1229 06:55:00.696583   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.238774    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-695625" podStartSLOduration=3.238759924 podStartE2EDuration="3.238759924s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.238698819 +0000 UTC m=+1.281295638" watchObservedRunningTime="2025-12-29 06:52:42.238759924 +0000 UTC m=+1.281356744"
	I1229 06:55:00.696609   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.238905    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-functional-695625" podStartSLOduration=3.238868136 podStartE2EDuration="3.238868136s" podCreationTimestamp="2025-12-29 06:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:42.224445467 +0000 UTC m=+1.267042290" watchObservedRunningTime="2025-12-29 06:52:42.238868136 +0000 UTC m=+1.281464962"
	I1229 06:55:00.696622   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266475    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696634   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266615    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696651   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.266971    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696664   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: I1229 06:52:42.267487    2634 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696678   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.287234    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-functional-695625\" already exists" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.696690   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.287316    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696704   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.292837    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-functional-695625\" already exists" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.696718   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.293863    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.696730   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.293764    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-functional-695625\" already exists" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.696745   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.294163    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.696757   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.298557    2634 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-functional-695625\" already exists" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.696770   17440 command_runner.go:130] > Dec 29 06:52:42 functional-695625 kubelet[2634]: E1229 06:52:42.298633    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696782   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.272537    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.696807   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273148    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696835   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273501    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.696850   17440 command_runner.go:130] > Dec 29 06:52:43 functional-695625 kubelet[2634]: E1229 06:52:43.273627    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696863   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: E1229 06:52:44.279056    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.696877   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: E1229 06:52:44.279353    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.696887   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: I1229 06:52:44.754123    2634 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1229 06:55:00.696899   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 kubelet[2634]: I1229 06:52:44.756083    2634 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1229 06:55:00.696917   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.407560    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94mg5\" (UniqueName: \"kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696938   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.408503    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-proxy\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696958   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.408957    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-xtables-lock\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696976   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: I1229 06:52:45.409131    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-lib-modules\") pod \"kube-proxy-g7lp9\" (UID: \"9c2c2ac1-7fa0-427d-b78e-ee14e169895a\") " pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.696991   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528153    2634 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697004   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528186    2634 projected.go:196] Error preparing data for projected volume kube-api-access-94mg5 for pod kube-system/kube-proxy-g7lp9: configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697032   17440 command_runner.go:130] > Dec 29 06:52:45 functional-695625 kubelet[2634]: E1229 06:52:45.528293    2634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5 podName:9c2c2ac1-7fa0-427d-b78e-ee14e169895a nodeName:}" failed. No retries permitted until 2025-12-29 06:52:46.028266861 +0000 UTC m=+5.070863673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-94mg5" (UniqueName: "kubernetes.io/projected/9c2c2ac1-7fa0-427d-b78e-ee14e169895a-kube-api-access-94mg5") pod "kube-proxy-g7lp9" (UID: "9c2c2ac1-7fa0-427d-b78e-ee14e169895a") : configmap "kube-root-ca.crt" not found
	I1229 06:55:00.697044   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.406131    2634 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	I1229 06:55:00.697064   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519501    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64sn\" (UniqueName: \"kubernetes.io/projected/00a95e37-1394-45a7-a376-b195e31e3e9c-kube-api-access-b64sn\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:55:00.697084   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519550    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a95e37-1394-45a7-a376-b195e31e3e9c-config-volume\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:55:00.697104   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519571    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:55:00.697124   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519587    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:55:00.697138   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.411642    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605"
	I1229 06:55:00.697151   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.545186    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.697170   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731196    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f201ca-6d54-4e15-9584-396fb1486f3c-tmp\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:55:00.697192   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731252    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc5d\" (UniqueName: \"kubernetes.io/projected/b5f201ca-6d54-4e15-9584-396fb1486f3c-kube-api-access-ghc5d\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:55:00.697206   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.628275    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697229   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.634714    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9mrnn" podStartSLOduration=2.634698273 podStartE2EDuration="2.634698273s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.631484207 +0000 UTC m=+7.674081027" watchObservedRunningTime="2025-12-29 06:52:48.634698273 +0000 UTC m=+7.677295093"
	I1229 06:55:00.697245   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.649761    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.697268   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.694857    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfq7m" podStartSLOduration=2.694842541 podStartE2EDuration="2.694842541s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.672691157 +0000 UTC m=+7.715287974" watchObservedRunningTime="2025-12-29 06:52:48.694842541 +0000 UTC m=+7.737439360"
	I1229 06:55:00.697296   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.728097    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.728082592 podStartE2EDuration="1.728082592s" podCreationTimestamp="2025-12-29 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.696376688 +0000 UTC m=+7.738973499" watchObservedRunningTime="2025-12-29 06:52:48.728082592 +0000 UTC m=+7.770679413"
	I1229 06:55:00.697310   17440 command_runner.go:130] > Dec 29 06:52:49 functional-695625 kubelet[2634]: E1229 06:52:49.674249    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697322   17440 command_runner.go:130] > Dec 29 06:52:50 functional-695625 kubelet[2634]: E1229 06:52:50.680852    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697336   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.223368    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.697361   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: I1229 06:52:52.243928    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g7lp9" podStartSLOduration=7.243911092 podStartE2EDuration="7.243911092s" podCreationTimestamp="2025-12-29 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.744380777 +0000 UTC m=+7.786977597" watchObservedRunningTime="2025-12-29 06:52:52.243911092 +0000 UTC m=+11.286507895"
	I1229 06:55:00.697376   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.396096    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.697388   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.693687    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.697402   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: E1229 06:52:53.390926    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.697420   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979173    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:55:00.697442   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979225    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:55:00.697463   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979732    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	I1229 06:55:00.697483   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.981248    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "kube-api-access-lc5xj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	I1229 06:55:00.697499   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079447    2634 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:55:00.697515   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079521    2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:55:00.697526   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.715729    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697536   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.756456    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697554   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: E1229 06:52:54.758451    2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697576   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.758508    2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"} err="failed to get container status \"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:55:00.697591   17440 command_runner.go:130] > Dec 29 06:52:55 functional-695625 kubelet[2634]: I1229 06:52:55.144582    2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4313c5f-3b86-48de-8f3c-02d7e007542a" path="/var/lib/kubelet/pods/c4313c5f-3b86-48de-8f3c-02d7e007542a/volumes"
	I1229 06:55:00.697608   17440 command_runner.go:130] > Dec 29 06:52:58 functional-695625 kubelet[2634]: E1229 06:52:58.655985    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.697621   17440 command_runner.go:130] > Dec 29 06:53:20 functional-695625 kubelet[2634]: E1229 06:53:20.683378    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.697637   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913108    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697651   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913180    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697669   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913193    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697710   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915141    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697726   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915181    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697746   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915192    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697762   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139490    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.697775   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139600    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697790   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139623    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697815   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139634    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697830   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917175    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697846   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917271    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697860   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917284    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697876   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918722    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.697892   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918780    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697906   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918792    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697923   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139097    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.697937   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139170    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697951   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139187    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697966   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139214    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.697986   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921730    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698002   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921808    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698029   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921823    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698046   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.923664    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698060   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924161    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698081   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924185    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698097   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139396    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698113   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139458    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698126   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139472    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698141   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139485    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698155   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698172   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698187   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:55:00.698202   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698218   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:55:00.698235   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698274   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698293   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698309   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698325   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698341   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698362   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698378   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698395   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698408   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698424   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698439   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698455   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698469   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698484   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698501   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698514   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698527   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698541   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698554   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698577   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698590   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698606   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698620   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698634   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698650   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698666   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698682   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698696   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698711   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698727   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698743   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698756   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698769   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698784   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698808   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698823   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698840   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698853   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698868   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698886   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:55:00.698903   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698916   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698933   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698948   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:55:00.698962   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698976   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:55:00.698993   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:55:00.699007   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:55:00.699018   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699031   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699042   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.699055   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.699067   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:55:00.699078   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.699093   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699105   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699119   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.699130   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:55:00.699145   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.699157   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:55:00.699180   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:55:00.699195   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.699207   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:55:00.699224   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:55:00.699243   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:55:00.699256   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:55:00.699269   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.699284   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.699310   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.699330   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.699343   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:55:00.699362   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:55:00.699380   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.699407   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:55:00.699439   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:55:00.699460   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:55:00.699477   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.699497   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.699515   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:55:00.699533   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.699619   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.699640   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.699660   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699683   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699709   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:55:00.699722   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:55:00.699738   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699750   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:55:00.699763   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699774   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:55:00.699785   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699807   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:55:00.699820   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.699834   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699846   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699861   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.699872   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.699886   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:55:00.699931   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.699946   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.699956   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:55:00.699972   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700008   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:55:00.700031   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700053   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700067   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:55:00.700078   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:55:00.700091   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:55:00.700102   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700116   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:55:00.700129   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.700139   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:55:00.700159   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700168   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:55:00.700179   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:55:00.700190   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700199   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700217   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700228   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700240   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:55:00.700250   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:55:00.700268   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700281   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:55:00.700291   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:55:00.700310   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:55:00.700321   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700331   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:55:00.700349   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700364   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700375   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700394   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700405   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700415   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:55:00.700427   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:55:00.700454   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:55:00.700474   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:55:00.700515   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:55:00.700529   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:55:00.700539   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:55:00.700558   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:55:00.700570   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.700578   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:55:00.700584   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:55:00.700590   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:55:00.700597   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:55:00.700603   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:55:00.700612   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:55:00.700620   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:55:00.700631   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:55:00.700641   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:55:00.700652   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:55:00.700662   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:55:00.700674   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:55:00.700684   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:55:00.700696   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:55:00.700707   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:55:00.700717   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:55:00.700758   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:55:00.700770   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:55:00.700779   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:55:00.700790   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:55:00.700816   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:55:00.700831   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:55:00.700846   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:55:00.700858   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:55:00.700866   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:55:00.700879   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:55:00.700891   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:55:00.700905   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:55:00.700912   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:55:00.700921   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:55:00.700932   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.700943   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:55:00.700951   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:55:00.700963   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:55:00.700971   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:55:00.700986   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:55:00.701000   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:55:00.701008   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:55:00.701020   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:55:00.701029   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:55:00.701037   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:55:00.701046   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:55:00.701061   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:55:00.701073   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:55:00.701082   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:55:00.701093   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:55:00.701100   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:55:00.701114   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:55:00.701124   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:55:00.701143   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.701160   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:55:00.701170   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:55:00.701178   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:55:00.701188   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:55:00.701201   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:55:00.701210   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:55:00.701218   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:55:00.701226   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:55:00.701237   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:55:00.701246   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:55:00.701256   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:55:00.701266   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:55:00.701277   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:55:00.701287   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:55:00.701297   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:55:00.701308   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:55:00.701322   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701334   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701348   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701361   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:55:00.701372   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:55:00.701385   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:55:00.701399   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:55:00.701410   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:55:00.701422   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:55:00.701433   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701447   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:55:00.701458   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:55:00.701471   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:55:00.701483   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:55:00.701496   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:55:00.701508   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:55:00.701521   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:55:00.701533   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:55:00.701550   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.701567   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:55:00.701581   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701592   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701611   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:55:00.701625   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701642   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:55:00.701678   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:55:00.701695   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:55:00.701705   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:55:00.701716   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701735   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:55:00.701749   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.701764   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:55:00.701780   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:55:00.701807   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701827   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701847   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701867   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701886   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:55:00.701907   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:55:00.701928   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701948   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701971   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:55:00.701995   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.702014   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:55:00.702027   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:55:00.755255   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:55:00.755293   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:55:00.771031   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:55:00.771066   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:55:00.771079   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:55:00.771088   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:55:00.771097   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:55:00.771103   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:55:00.771109   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:55:00.771116   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:55:00.771121   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:55:00.771126   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:55:00.771131   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:55:00.771136   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:55:00.771143   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:55:00.771153   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:55:00.771158   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:55:00.771165   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:55:00.771175   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:55:00.771185   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:55:00.771191   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:55:00.771196   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:55:00.771202   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:55:00.772218   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:55:00.772246   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:56:00.863293   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:56:00.863340   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.091082059s)
	W1229 06:56:00.863385   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:56:00.863402   17440 logs.go:123] Gathering logs for kube-apiserver [fb6db97d8ffe] ...
	I1229 06:56:00.863420   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6db97d8ffe"
	I1229 06:56:00.897112   17440 command_runner.go:130] ! I1229 06:53:50.588377       1 options.go:263] external host was not specified, using 192.168.39.121
	I1229 06:56:00.897142   17440 command_runner.go:130] ! I1229 06:53:50.597275       1 server.go:150] Version: v1.35.0
	I1229 06:56:00.897153   17440 command_runner.go:130] ! I1229 06:53:50.597323       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:00.897164   17440 command_runner.go:130] ! E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	W1229 06:56:00.898716   17440 logs.go:138] Found kube-apiserver [fb6db97d8ffe] problem: E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:56:00.898738   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:56:00.898750   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:56:00.935530   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:00.938590   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:00.938653   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:00.938666   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:00.938679   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:00.938689   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:00.938712   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:00.938728   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:00.938838   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:00.938875   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:56:00.938892   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:00.938902   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:56:00.938913   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:56:00.938922   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:00.938935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:00.938946   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:00.938958   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:56:00.938969   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:00.938978   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:00.938993   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:00.939003   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:00.939022   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:00.939035   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:00.939046   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:00.939053   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:00.939062   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:00.939071   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:56:00.939081   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:56:00.939091   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:00.939111   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:00.939126   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:00.939142   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:00.939162   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:00.939181   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:00.939213   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:00.939249   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:00.939258   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:00.939274   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:00.939289   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:00.939302   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:00.939324   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:00.939342   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.939352   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:00.939362   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:00.939377   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:00.939389   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:00.939404   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:56:00.939423   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:56:00.939439   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:56:00.939458   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:00.939467   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:56:00.939478   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:00.939494   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:00.939513   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:56:00.939528   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:56:00.939544   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:00.939564   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:00.939586   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:00.939603   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:00.939616   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:00.939882   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:00.939915   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:00.939932   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:00.939947   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:00.939960   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:56:00.939998   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:00.940030   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:00.940064   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:00.940122   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940150   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:56:00.940167   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:56:00.940187   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:00.940204   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:00.940257   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940277   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:56:00.940301   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:00.940334   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:00.940371   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940389   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.940425   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:00.940447   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:00.940473   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:00.955065   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:56:00.955108   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 06:56:00.955188   17440 out.go:285] X Problems detected in kube-apiserver [fb6db97d8ffe]:
	W1229 06:56:00.955202   17440 out.go:285]   E1229 06:53:50.606724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:56:00.955209   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:56:00.955215   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:56:10.957344   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:56:15.961183   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:56:15.961319   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:56:15.981705   17440 command_runner.go:130] > 18d0015c724a
	I1229 06:56:15.982641   17440 logs.go:282] 1 containers: [18d0015c724a]
	I1229 06:56:15.982732   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:56:16.002259   17440 command_runner.go:130] > 6b7711ee25a2
	I1229 06:56:16.002292   17440 command_runner.go:130] > d81259f64136
	I1229 06:56:16.002322   17440 logs.go:282] 2 containers: [6b7711ee25a2 d81259f64136]
	I1229 06:56:16.002399   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:56:16.021992   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:56:16.022032   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:56:16.022113   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:56:16.048104   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:56:16.048133   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:56:16.049355   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:56:16.049441   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:56:16.071523   17440 command_runner.go:130] > 8911777281f4
	I1229 06:56:16.072578   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:56:16.072668   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:56:16.092921   17440 command_runner.go:130] > f48fc04e3475
	I1229 06:56:16.092948   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:56:16.092975   17440 logs.go:282] 2 containers: [f48fc04e3475 17fe16a2822a]
	I1229 06:56:16.093047   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:56:16.113949   17440 logs.go:282] 0 containers: []
	W1229 06:56:16.113983   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:56:16.114047   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:56:16.135700   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:56:16.135739   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:56:16.135766   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:56:16.135786   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:56:16.152008   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:56:16.152038   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:56:16.152046   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:56:16.152054   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:56:16.152063   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:56:16.152069   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:56:16.152076   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:56:16.152081   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:56:16.152086   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:56:16.152091   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:56:16.152096   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:56:16.152102   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:56:16.152107   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:56:16.152112   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:56:16.152119   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:56:16.152128   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:56:16.152148   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:56:16.152164   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:56:16.152180   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:56:16.152190   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:56:16.152201   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:56:16.152209   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:56:16.152217   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:56:16.153163   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:56:16.153192   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:56:16.174824   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:56:16.174856   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:56:16.174862   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:56:16.174873   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:56:16.174892   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:56:16.174900   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:56:16.174913   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:56:16.174920   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:56:16.174924   17440 command_runner.go:130] !  >
	I1229 06:56:16.174931   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:56:16.174941   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:56:16.174957   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:56:16.174966   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:56:16.174975   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.174985   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:56:16.174994   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:56:16.175003   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:56:16.175012   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:56:16.175024   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:56:16.175033   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:56:16.175040   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:56:16.175050   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:56:16.175074   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:56:16.175325   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:56:16.175351   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:56:16.175362   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
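The kube-proxy capture above can be repeated by hand against the same profile; a minimal sketch, reusing the container ID 8911777281f4 reported in the container listing further below:

	# tail the kube-proxy container log from inside the functional-695625 node
	minikube ssh -p functional-695625 -- "docker logs --tail 400 8911777281f4"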
	I1229 06:56:16.177120   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:56:16.177144   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:56:16.222627   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:56:16.222665   17440 command_runner.go:130] > 18d0015c724a8       5c6acd67e9cd1       5 seconds ago       Exited              kube-apiserver            3                   d3819cc8ab802       kube-apiserver-functional-695625            kube-system
	I1229 06:56:16.222684   17440 command_runner.go:130] > f48fc04e34751       2c9a4b058bd7e       16 seconds ago      Running             kube-controller-manager   2                   0a96e34d38f8c       kube-controller-manager-functional-695625   kube-system
	I1229 06:56:16.222707   17440 command_runner.go:130] > 6b7711ee25a2d       0a108f7189562       16 seconds ago      Running             etcd                      2                   173054afc2f39       etcd-functional-695625                      kube-system
	I1229 06:56:16.222730   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       2 minutes ago       Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:56:16.222749   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       2 minutes ago       Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:56:16.222768   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       2 minutes ago       Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:56:16.222810   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       2 minutes ago       Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:56:16.222831   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       2 minutes ago       Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:56:16.222851   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:56:16.222879   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       3 minutes ago       Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
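The listing above comes from the crictl/docker fallback shown in the command line; the same check can be run interactively. A sketch, assuming crictl is installed in the guest (it falls back to docker otherwise):

	# list all containers on the node, preferring crictl and falling back to docker
	minikube ssh -p functional-695625 -- "sudo crictl ps -a || sudo docker ps -a"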
	I1229 06:56:16.225409   17440 logs.go:123] Gathering logs for etcd [6b7711ee25a2] ...
	I1229 06:56:16.225439   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b7711ee25a2"
	I1229 06:56:16.247416   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.924768Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.247449   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925193Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:16.247516   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925252Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:16.247533   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925487Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:16.247545   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925602Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.247555   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925710Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:16.247582   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925810Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.247605   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.934471Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:16.247698   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.935217Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:16.247722   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.937503Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000068080}"}
	I1229 06:56:16.247733   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940423Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:16.247745   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940850Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.479356ms"}
	I1229 06:56:16.247753   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.941120Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":499}
	I1229 06:56:16.247762   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945006Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:16.247774   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945707Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:16.247782   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945966Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:16.247807   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.951906Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":499}
	I1229 06:56:16.247816   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952063Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:16.247825   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952160Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:16.247840   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952338Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:16.247851   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952385Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:16.247867   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952396Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:16.247878   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952406Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:16.247886   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952416Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:16.247893   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952460Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:16.247902   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:16.247914   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 3"}
	I1229 06:56:16.247924   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 3, commit: 499, applied: 0, lastindex: 499, lastterm: 3]"}
	I1229 06:56:16.247935   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.955095Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:16.247952   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.961356Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:16.247965   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.967658Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:16.247975   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.968487Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:16.247988   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969020Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.248000   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969260Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:16.248016   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969708Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:16.248035   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970043Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.248063   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970828Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:16.248074   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971046Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:16.248083   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970057Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.248092   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971258Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:16.248103   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970152Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:16.248113   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971336Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:16.248126   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971370Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:16.248136   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970393Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:16.248153   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972410Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:16.248166   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972698Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:16.248177   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 3"}
	I1229 06:56:16.248186   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 3"}
	I1229 06:56:16.248198   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.248208   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.248219   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 4"}
	I1229 06:56:16.248228   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 4"}
	I1229 06:56:16.248240   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.248248   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 4"}
	I1229 06:56:16.248260   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.356018Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 4"}
	I1229 06:56:16.248275   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358237Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:16.248287   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358323Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.248295   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358268Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.248304   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:16.248312   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:16.248320   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360417Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.248331   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.248341   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:16.248352   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363760Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
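The restarted etcd above reports its metrics endpoint on http://127.0.0.1:2381; etcd also serves a /health document there, which gives a quick liveness check from the node. A sketch, assuming curl is present in the guest image:

	# probe the local etcd health endpoint advertised in the log above
	minikube ssh -p functional-695625 -- "curl -s http://127.0.0.1:2381/health"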
	I1229 06:56:16.254841   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:56:16.254869   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:56:16.278647   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.278679   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:56:16.278723   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:56:16.278736   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:56:16.278750   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:56:16.278759   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:56:16.278780   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.278809   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:56:16.278890   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:56:16.278913   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:56:16.278923   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:56:16.278935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:56:16.278946   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:56:16.278957   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:56:16.278971   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:56:16.278982   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:56:16.278996   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:56:16.279006   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:56:16.279014   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:56:16.279031   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:56:16.279040   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:56:16.279072   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:56:16.279083   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:56:16.279091   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:56:16.279101   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:56:16.279110   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:56:16.279121   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:56:16.279132   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:56:16.279142   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:56:16.279159   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:56:16.279173   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:56:16.279183   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:56:16.279195   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.279208   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:56:16.279226   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:56:16.279249   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:56:16.279260   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:56:16.279275   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:56:16.279289   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:56:16.279300   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:56:16.279313   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:56:16.279322   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279332   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:56:16.279343   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:56:16.279359   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:56:16.279374   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:56:16.279386   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:56:16.279396   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:56:16.279406   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:56:16.279418   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.279429   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:56:16.279439   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.279451   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:56:16.279460   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:56:16.279469   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:56:16.279479   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.279494   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:56:16.279503   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:56:16.279513   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.279523   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:56:16.279531   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:56:16.279541   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:56:16.279551   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:56:16.279562   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:56:16.279570   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:56:16.279585   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:56:16.279603   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:16.279622   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:56:16.279661   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279676   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:56:16.279688   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:56:16.279698   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:16.279711   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:56:16.279730   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279741   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:56:16.279751   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:16.279764   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:56:16.279785   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279805   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279825   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:56:16.279836   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:56:16.279852   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
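The earlier etcd instance above shuts down after "received signal; shutting down", so it appears to have been terminated as part of the restart rather than crashing on its own. The surrounding kubelet/docker activity at that timestamp can be pulled from the node journal; a sketch with an assumed time window around 06:53:51:

	# show kubelet and docker journal entries around the etcd shutdown
	minikube ssh -p functional-695625 -- "sudo journalctl -u kubelet -u docker --since 06:53:45 --until 06:54:05 --no-pager"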
	I1229 06:56:16.287590   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:56:16.287613   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:56:16.310292   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:56:16.310320   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:56:16.331009   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:56:16.331044   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:56:16.331054   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:56:16.331067   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331076   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331083   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:56:16.331093   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:56:16.331114   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:56:16.331232   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331256   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331268   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:56:16.331275   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331289   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331298   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331316   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331329   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331341   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331355   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331363   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331374   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331386   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331400   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331413   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331425   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331441   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331454   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331468   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331478   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331488   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331496   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331506   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331519   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331529   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331537   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331547   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331555   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331564   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331572   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331580   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331592   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331604   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331618   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331629   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331645   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331659   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331673   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331689   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331703   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331716   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331728   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331740   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331756   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331771   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331784   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331816   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331830   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331847   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331863   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331879   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331894   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.331908   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:56:16.336243   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:56:16.336267   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:56:16.358115   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358145   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358155   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358165   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358177   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.358186   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:56:16.358194   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:56:16.358203   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358209   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.358220   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:56:16.358229   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.358241   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.358254   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.358266   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.358278   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.358285   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:56:16.358307   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.358315   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:56:16.358328   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.358336   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.358343   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:56:16.358350   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358360   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.358369   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.358377   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.358385   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.358399   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.358408   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.358415   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358425   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358436   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358445   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358455   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:56:16.358463   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.358474   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.358481   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.358491   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.358500   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.358508   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.358515   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358530   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:56:16.358543   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:56:16.358555   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.358576   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.358584   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.358593   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.358604   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:56:16.358614   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.358621   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.358628   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.358635   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.358644   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:56:16.358653   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.358666   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.358685   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.358697   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.358707   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.358716   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:56:16.358735   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.358745   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:56:16.358755   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.358763   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.358805   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.358818   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.358827   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.358837   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.358847   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.358854   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.358861   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358867   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:56:16.358874   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358881   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.358893   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.358904   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.358913   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.358921   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:56:16.358930   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.358942   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.358950   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.358959   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.358970   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.358979   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.358986   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.358992   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.359001   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.359011   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:56:16.359021   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.359029   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.359036   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.359042   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.359052   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:56:16.359060   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.359071   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.359084   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.359094   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.359106   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.359113   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:56:16.359135   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.359144   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:56:16.359154   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.359164   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.359172   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.359182   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.359190   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.359198   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.359206   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.359213   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.359244   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359260   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359275   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359288   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359300   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:56:16.359313   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359328   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359343   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359357   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.359372   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359386   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359399   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.359410   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:56:16.359422   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:56:16.359435   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:56:16.359442   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:56:16.359452   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:56:16.359460   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:56:16.359468   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:56:16.359474   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:56:16.359481   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:56:16.359487   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:56:16.359494   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:56:16.359502   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:56:16.359511   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:56:16.359521   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:56:16.359532   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:56:16.359544   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:56:16.359553   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:56:16.359561   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:56:16.359574   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359590   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359602   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359617   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359630   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359646   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359660   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359676   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359689   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359706   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359719   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359731   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359744   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.359763   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:56:16.359779   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:56:16.359800   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:56:16.359813   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:56:16.359827   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:56:16.359837   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:56:16.359852   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:56:16.359864   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:56:16.359878   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:56:16.359890   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:56:16.359904   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:56:16.359916   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:56:16.359932   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:56:16.359945   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:56:16.359960   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:56:16.359975   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:56:16.359988   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:56:16.360003   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:56:16.360019   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:56:16.360037   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:56:16.360051   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:56:16.360064   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:56:16.360074   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:56:16.360085   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:56:16.360093   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:56:16.360102   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:56:16.360113   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:56:16.360121   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:56:16.360130   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:56:16.360163   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:56:16.360172   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:56:16.360189   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:56:16.360197   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.360204   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:56:16.360210   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.360218   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:56:16.360225   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:56:16.360236   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:56:16.360245   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:56:16.360255   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:56:16.360263   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:56:16.360271   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:56:16.360280   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:56:16.360288   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:56:16.360297   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:56:16.360308   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:56:16.360317   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:56:16.360326   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:56:16.360338   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360353   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360365   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360380   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360392   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360410   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360426   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:56:16.360441   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360454   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360467   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360482   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360494   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360510   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360525   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360538   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360553   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360566   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360582   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360599   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:56:16.360617   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360628   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:56:16.360643   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:56:16.360656   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360671   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:56:16.360682   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:56:16.360699   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.360711   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:56:16.360726   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.360736   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:56:16.360749   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.360762   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:56:16.377860   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:56:16.377891   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:56:16.394828   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.406131    2634 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	I1229 06:56:16.394877   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519501    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64sn\" (UniqueName: \"kubernetes.io/projected/00a95e37-1394-45a7-a376-b195e31e3e9c-kube-api-access-b64sn\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:56:16.394896   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519550    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a95e37-1394-45a7-a376-b195e31e3e9c-config-volume\") pod \"coredns-7d764666f9-wfq7m\" (UID: \"00a95e37-1394-45a7-a376-b195e31e3e9c\") " pod="kube-system/coredns-7d764666f9-wfq7m"
	I1229 06:56:16.394920   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519571    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:56:16.394952   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 kubelet[2634]: I1229 06:52:46.519587    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"coredns-7d764666f9-9mrnn\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") " pod="kube-system/coredns-7d764666f9-9mrnn"
	I1229 06:56:16.394976   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.411642    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605"
	I1229 06:56:16.394988   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.545186    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.395012   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731196    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f201ca-6d54-4e15-9584-396fb1486f3c-tmp\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:56:16.395045   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 kubelet[2634]: I1229 06:52:47.731252    2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc5d\" (UniqueName: \"kubernetes.io/projected/b5f201ca-6d54-4e15-9584-396fb1486f3c-kube-api-access-ghc5d\") pod \"storage-provisioner\" (UID: \"b5f201ca-6d54-4e15-9584-396fb1486f3c\") " pod="kube-system/storage-provisioner"
	I1229 06:56:16.395075   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.628275    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395109   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.634714    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9mrnn" podStartSLOduration=2.634698273 podStartE2EDuration="2.634698273s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.631484207 +0000 UTC m=+7.674081027" watchObservedRunningTime="2025-12-29 06:52:48.634698273 +0000 UTC m=+7.677295093"
	I1229 06:56:16.395143   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: E1229 06:52:48.649761    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.395179   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.694857    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfq7m" podStartSLOduration=2.694842541 podStartE2EDuration="2.694842541s" podCreationTimestamp="2025-12-29 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.672691157 +0000 UTC m=+7.715287974" watchObservedRunningTime="2025-12-29 06:52:48.694842541 +0000 UTC m=+7.737439360"
	I1229 06:56:16.395221   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 kubelet[2634]: I1229 06:52:48.728097    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.728082592 podStartE2EDuration="1.728082592s" podCreationTimestamp="2025-12-29 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.696376688 +0000 UTC m=+7.738973499" watchObservedRunningTime="2025-12-29 06:52:48.728082592 +0000 UTC m=+7.770679413"
	I1229 06:56:16.395242   17440 command_runner.go:130] > Dec 29 06:52:49 functional-695625 kubelet[2634]: E1229 06:52:49.674249    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395263   17440 command_runner.go:130] > Dec 29 06:52:50 functional-695625 kubelet[2634]: E1229 06:52:50.680852    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395283   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.223368    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.395324   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: I1229 06:52:52.243928    2634 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g7lp9" podStartSLOduration=7.243911092 podStartE2EDuration="7.243911092s" podCreationTimestamp="2025-12-29 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:52:48.744380777 +0000 UTC m=+7.786977597" watchObservedRunningTime="2025-12-29 06:52:52.243911092 +0000 UTC m=+11.286507895"
	I1229 06:56:16.395347   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.396096    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.395368   17440 command_runner.go:130] > Dec 29 06:52:52 functional-695625 kubelet[2634]: E1229 06:52:52.693687    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.395390   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: E1229 06:52:53.390926    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.395423   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979173    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:56:16.395451   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979225    2634 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") pod \"c4313c5f-3b86-48de-8f3c-02d7e007542a\" (UID: \"c4313c5f-3b86-48de-8f3c-02d7e007542a\") "
	I1229 06:56:16.395496   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.979732    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	I1229 06:56:16.395529   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 kubelet[2634]: I1229 06:52:53.981248    2634 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj" pod "c4313c5f-3b86-48de-8f3c-02d7e007542a" (UID: "c4313c5f-3b86-48de-8f3c-02d7e007542a"). InnerVolumeSpecName "kube-api-access-lc5xj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	I1229 06:56:16.395551   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079447    2634 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4313c5f-3b86-48de-8f3c-02d7e007542a-config-volume\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:56:16.395578   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.079521    2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc5xj\" (UniqueName: \"kubernetes.io/projected/c4313c5f-3b86-48de-8f3c-02d7e007542a-kube-api-access-lc5xj\") on node \"functional-695625\" DevicePath \"\""
	I1229 06:56:16.395597   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.715729    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395618   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.756456    2634 scope.go:122] "RemoveContainer" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395641   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: E1229 06:52:54.758451    2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f" containerID="67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395678   17440 command_runner.go:130] > Dec 29 06:52:54 functional-695625 kubelet[2634]: I1229 06:52:54.758508    2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"} err="failed to get container status \"67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f"
	I1229 06:56:16.395702   17440 command_runner.go:130] > Dec 29 06:52:55 functional-695625 kubelet[2634]: I1229 06:52:55.144582    2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4313c5f-3b86-48de-8f3c-02d7e007542a" path="/var/lib/kubelet/pods/c4313c5f-3b86-48de-8f3c-02d7e007542a/volumes"
	I1229 06:56:16.395719   17440 command_runner.go:130] > Dec 29 06:52:58 functional-695625 kubelet[2634]: E1229 06:52:58.655985    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.395743   17440 command_runner.go:130] > Dec 29 06:53:20 functional-695625 kubelet[2634]: E1229 06:53:20.683378    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.395770   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913108    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.395806   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913180    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395831   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 kubelet[2634]: E1229 06:53:25.913193    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395859   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915141    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.395885   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915181    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395903   17440 command_runner.go:130] > Dec 29 06:53:26 functional-695625 kubelet[2634]: E1229 06:53:26.915192    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395929   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139490    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.395956   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139600    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.395981   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139623    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396000   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.139634    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396027   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917175    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396052   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917271    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396087   17440 command_runner.go:130] > Dec 29 06:53:27 functional-695625 kubelet[2634]: E1229 06:53:27.917284    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396114   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918722    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396138   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918780    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396161   17440 command_runner.go:130] > Dec 29 06:53:28 functional-695625 kubelet[2634]: E1229 06:53:28.918792    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396186   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139097    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396267   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139170    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396295   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139187    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396315   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.139214    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396339   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921730    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396362   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921808    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396387   17440 command_runner.go:130] > Dec 29 06:53:29 functional-695625 kubelet[2634]: E1229 06:53:29.921823    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396413   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.923664    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396433   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924161    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396458   17440 command_runner.go:130] > Dec 29 06:53:30 functional-695625 kubelet[2634]: E1229 06:53:30.924185    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396484   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139396    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396508   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139458    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396526   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139472    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396550   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.139485    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396585   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396609   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396634   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:56:16.396662   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396687   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:56:16.396711   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396739   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396763   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396786   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396821   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396848   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.396872   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396891   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396919   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.396943   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396966   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.396989   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397016   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397040   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397064   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397089   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397114   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397139   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397161   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397187   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397211   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397233   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397256   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397281   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397307   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397330   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397358   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397387   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397424   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397450   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397477   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397500   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397521   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397544   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397571   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397594   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397618   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397644   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397668   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397686   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397742   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:56:16.397766   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397786   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397818   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397849   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:56:16.397872   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397897   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:56:16.397918   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:56:16.397940   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:56:16.397961   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.397984   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.398006   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.398027   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.398047   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:56:16.398071   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.398100   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398122   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398141   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.398162   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:56:16.398186   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:56:16.398209   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:56:16.398244   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:56:16.398272   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.398294   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:56:16.398317   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:56:16.398350   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:56:16.398371   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:56:16.398394   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.398413   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.398456   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.398481   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.398498   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:56:16.398525   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:56:16.398557   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.398599   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:56:16.398632   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:56:16.398661   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:56:16.398683   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.398714   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.398746   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:56:16.398769   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.398813   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.398843   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.398873   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398910   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398942   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:56:16.398963   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:56:16.398985   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399007   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:56:16.399028   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399052   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:56:16.399082   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399104   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:56:16.399121   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.399145   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399170   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399191   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399209   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399231   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.399253   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399275   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399295   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:56:16.399309   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399328   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:56:16.399366   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399402   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399416   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:56:16.399427   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:56:16.399440   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:56:16.399454   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399467   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:56:16.399491   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399517   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.399553   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399565   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:56:16.399576   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:56:16.399588   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399598   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.399618   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399629   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399640   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:56:16.399653   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:56:16.399671   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399684   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.399694   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.399724   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:56:16.399741   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399752   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:56:16.399771   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399782   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.399801   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.399822   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.399834   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399845   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.399857   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:56:16.399866   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:56:16.399885   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:56:16.399928   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:56:16.400087   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.400109   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.400130   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.400140   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.400147   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:56:16.400153   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:56:16.400162   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:56:16.400169   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:56:16.400175   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:56:16.400184   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:56:16.400193   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.400201   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:56:16.400213   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:56:16.400222   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:56:16.400233   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:56:16.400243   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.400253   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:56:16.400262   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:56:16.400272   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:56:16.400281   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:56:16.400693   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:56:16.400713   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:56:16.400724   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:56:16.400734   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:56:16.400742   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:56:16.400751   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:56:16.400760   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:56:16.400768   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:56:16.400780   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:56:16.400812   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:56:16.400833   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:56:16.400853   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:56:16.400868   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:56:16.400877   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:56:16.400887   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.400896   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:56:16.400903   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:56:16.400915   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:56:16.400924   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:56:16.400936   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:56:16.400950   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:56:16.400961   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:56:16.400972   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:56:16.400985   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:56:16.400993   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:56:16.401003   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:56:16.401016   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:56:16.401027   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:56:16.401036   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:56:16.401045   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:56:16.401053   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:56:16.401070   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:56:16.401083   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:56:16.401100   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.401120   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:56:16.401132   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:56:16.401141   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:56:16.401150   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:56:16.401160   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:56:16.401173   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:56:16.401180   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:56:16.401189   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:56:16.401198   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:56:16.401209   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:56:16.401217   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:56:16.401228   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:56:16.401415   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:56:16.401435   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:56:16.401444   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:56:16.401456   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:56:16.401467   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401486   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401508   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401529   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:56:16.401553   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:56:16.401575   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:56:16.401589   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:56:16.401602   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:56:16.401614   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:56:16.401628   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401640   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:56:16.401653   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:56:16.401667   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:56:16.401679   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:56:16.401693   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:56:16.401706   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:56:16.401720   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:56:16.401733   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.401745   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.401762   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:56:16.401816   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401840   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401871   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:56:16.401900   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.401920   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:56:16.401958   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.401977   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.401987   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402002   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402019   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:56:16.402033   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402048   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:56:16.402065   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:56:16.402085   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402107   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402134   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402169   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402204   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:56:16.402228   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:56:16.402250   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402272   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402294   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:56:16.402314   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.402335   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:56:16.402349   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402367   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:56:16.402405   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.402421   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.402433   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402444   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402530   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402557   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:56:16.402569   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402585   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:56:16.402600   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402639   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.402655   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:56:16.402666   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:56:16.402677   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402697   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:56:16.402714   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:56:16.402726   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.402737   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.402752   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:56:16.402917   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:56:16.402934   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:56:16.402947   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.402959   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.402972   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.402996   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403011   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403026   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403043   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403056   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403070   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403082   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403096   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403110   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403125   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403138   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403152   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403292   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403310   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403325   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403339   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403361   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403376   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403389   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403402   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403417   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403428   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403450   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403464   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:56:16.403480   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403495   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403506   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403636   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403671   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403686   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403702   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403720   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403739   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403753   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:56:16.403767   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403780   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.403806   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403820   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403833   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403850   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403871   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:56:16.403890   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:56:16.403914   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.403936   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.403952   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:56:16.403976   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.403994   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.404007   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:56:16.404022   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:56:16.404034   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:56:16.404046   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:56:16.404066   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:56:16.404085   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:56:16.404122   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:56:16.454878   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:56:16.454917   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:56:16.478085   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.478126   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:56:16.478136   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:56:16.478148   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:56:16.478155   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:56:16.478166   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.478175   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:56:16.478185   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:56:16.478194   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:56:16.478203   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.478825   17440 logs.go:123] Gathering logs for kube-controller-manager [f48fc04e3475] ...
	I1229 06:56:16.478843   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48fc04e3475"
	I1229 06:56:16.501568   17440 command_runner.go:130] ! I1229 06:56:01.090404       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.501592   17440 command_runner.go:130] ! I1229 06:56:01.103535       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:56:16.501601   17440 command_runner.go:130] ! I1229 06:56:01.103787       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.501610   17440 command_runner.go:130] ! I1229 06:56:01.105458       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:56:16.501623   17440 command_runner.go:130] ! I1229 06:56:01.105665       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.501630   17440 command_runner.go:130] ! I1229 06:56:01.105907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:56:16.501636   17440 command_runner.go:130] ! I1229 06:56:01.105924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.501982   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:56:16.501996   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:56:16.524487   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:56:16.524514   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:56:16.524523   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:56:16.524767   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:56:16.524788   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:56:16.524805   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:56:16.524812   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:56:16.526406   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:56:16.526437   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:57:16.604286   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:57:16.606268   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.079810784s)
	W1229 06:57:16.606306   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
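	(editorial sketch) The `describe nodes` probe above blocks for the full 1m0s server-side timeout before failing. As a minimal sketch, assuming the same profile name and the kubeconfig/kubectl paths shown in the log line above, the same probe can be re-run with an explicit client-side timeout so a hung apiserver fails fast instead of stalling log collection:

	# Hedged sketch: repeat the "describe nodes" probe minikube attempted above,
	# but bound it with kubectl's --request-timeout so it cannot hang for a minute.
	# Paths and profile name are taken from the log lines above; adjust if they differ.
	minikube -p functional-695625 ssh -- \
	  sudo /var/lib/minikube/binaries/v1.35.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig \
	    --request-timeout=15s \
	    describe nodes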
	I1229 06:57:16.606317   17440 logs.go:123] Gathering logs for kube-apiserver [18d0015c724a] ...
	I1229 06:57:16.606331   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d0015c724a"
	I1229 06:57:16.636305   17440 command_runner.go:130] ! Error response from daemon: No such container: 18d0015c724a
	W1229 06:57:16.636367   17440 logs.go:130] failed kube-apiserver [18d0015c724a]: command: /bin/bash -c "docker logs --tail 400 18d0015c724a" /bin/bash -c "docker logs --tail 400 18d0015c724a": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 18d0015c724a
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 18d0015c724a
	
	** /stderr **
	I1229 06:57:16.636376   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:57:16.636391   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:57:16.657452   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:57:19.160135   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:57:24.162053   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:57:24.162161   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1229 06:57:24.182182   17440 command_runner.go:130] > b206d555ad19
	I1229 06:57:24.183367   17440 logs.go:282] 1 containers: [b206d555ad19]
	I1229 06:57:24.183464   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1229 06:57:24.206759   17440 command_runner.go:130] > 6b7711ee25a2
	I1229 06:57:24.206821   17440 command_runner.go:130] > d81259f64136
	I1229 06:57:24.206853   17440 logs.go:282] 2 containers: [6b7711ee25a2 d81259f64136]
	I1229 06:57:24.206926   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1229 06:57:24.228856   17440 command_runner.go:130] > 6f69ba6a1553
	I1229 06:57:24.228897   17440 logs.go:282] 1 containers: [6f69ba6a1553]
	I1229 06:57:24.228968   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1229 06:57:24.247867   17440 command_runner.go:130] > 4d49952084c9
	I1229 06:57:24.247890   17440 command_runner.go:130] > a79d99ad3fde
	I1229 06:57:24.249034   17440 logs.go:282] 2 containers: [4d49952084c9 a79d99ad3fde]
	I1229 06:57:24.249130   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1229 06:57:24.268209   17440 command_runner.go:130] > 8911777281f4
	I1229 06:57:24.269160   17440 logs.go:282] 1 containers: [8911777281f4]
	I1229 06:57:24.269243   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1229 06:57:24.288837   17440 command_runner.go:130] > f48fc04e3475
	I1229 06:57:24.288871   17440 command_runner.go:130] > 17fe16a2822a
	I1229 06:57:24.290245   17440 logs.go:282] 2 containers: [f48fc04e3475 17fe16a2822a]
	I1229 06:57:24.290337   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1229 06:57:24.312502   17440 logs.go:282] 0 containers: []
	W1229 06:57:24.312531   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:57:24.312592   17440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1229 06:57:24.334811   17440 command_runner.go:130] > bd96b57aa9fc
	I1229 06:57:24.334849   17440 logs.go:282] 1 containers: [bd96b57aa9fc]
	I1229 06:57:24.334875   17440 logs.go:123] Gathering logs for kube-apiserver [b206d555ad19] ...
	I1229 06:57:24.334888   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b206d555ad19"
	I1229 06:57:24.357541   17440 command_runner.go:130] ! I1229 06:57:22.434262       1 options.go:263] external host was not specified, using 192.168.39.121
	I1229 06:57:24.357567   17440 command_runner.go:130] ! I1229 06:57:22.436951       1 server.go:150] Version: v1.35.0
	I1229 06:57:24.357577   17440 command_runner.go:130] ! I1229 06:57:22.436991       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.357602   17440 command_runner.go:130] ! E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	W1229 06:57:24.359181   17440 logs.go:138] Found kube-apiserver [b206d555ad19] problem: E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
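	(editorial sketch) The flagged kube-apiserver problem above is a bind failure on 0.0.0.0:8441 ("address already in use"), i.e. some process already holds the apiserver port when the restarted container comes up. A minimal follow-up check, assuming the iproute2 `ss` tool is present in the minikube guest image, would be to list the listener on that port from inside the VM:

	# Hedged sketch: show which process currently holds port 8441 inside the guest,
	# since the restarted kube-apiserver above exits with "bind: address already in use".
	# Assumes `ss` (iproute2) exists in the minikube VM; `netstat -ltnp` is an alternative.
	minikube -p functional-695625 ssh -- sudo ss -ltnp 'sport = :8441'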
	I1229 06:57:24.359206   17440 logs.go:123] Gathering logs for kube-controller-manager [f48fc04e3475] ...
	I1229 06:57:24.359218   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48fc04e3475"
	I1229 06:57:24.381077   17440 command_runner.go:130] ! I1229 06:56:01.090404       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:57:24.381103   17440 command_runner.go:130] ! I1229 06:56:01.103535       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:57:24.381113   17440 command_runner.go:130] ! I1229 06:56:01.103787       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.381121   17440 command_runner.go:130] ! I1229 06:56:01.105458       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:57:24.381131   17440 command_runner.go:130] ! I1229 06:56:01.105665       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.381137   17440 command_runner.go:130] ! I1229 06:56:01.105907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:57:24.381144   17440 command_runner.go:130] ! I1229 06:56:01.105924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:57:24.382680   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:57:24.382711   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:57:24.427354   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	I1229 06:57:24.427382   17440 command_runner.go:130] > b206d555ad194       5c6acd67e9cd1       2 seconds ago        Exited              kube-apiserver            5                   d3819cc8ab802       kube-apiserver-functional-695625            kube-system
	I1229 06:57:24.427400   17440 command_runner.go:130] > f48fc04e34751       2c9a4b058bd7e       About a minute ago   Running             kube-controller-manager   2                   0a96e34d38f8c       kube-controller-manager-functional-695625   kube-system
	I1229 06:57:24.427411   17440 command_runner.go:130] > 6b7711ee25a2d       0a108f7189562       About a minute ago   Running             etcd                      2                   173054afc2f39       etcd-functional-695625                      kube-system
	I1229 06:57:24.427421   17440 command_runner.go:130] > 4d49952084c92       550794e3b12ac       3 minutes ago        Running             kube-scheduler            2                   fefef7c5591ea       kube-scheduler-functional-695625            kube-system
	I1229 06:57:24.427441   17440 command_runner.go:130] > 6f69ba6a1553a       aa5e3ebc0dfed       3 minutes ago        Exited              coredns                   1                   a014f32abcd01       coredns-7d764666f9-wfq7m                    kube-system
	I1229 06:57:24.427454   17440 command_runner.go:130] > d81259f64136c       0a108f7189562       3 minutes ago        Exited              etcd                      1                   1fc5fa7d92959       etcd-functional-695625                      kube-system
	I1229 06:57:24.427465   17440 command_runner.go:130] > 17fe16a2822a8       2c9a4b058bd7e       3 minutes ago        Exited              kube-controller-manager   1                   98261fa185f6e       kube-controller-manager-functional-695625   kube-system
	I1229 06:57:24.427477   17440 command_runner.go:130] > a79d99ad3fde3       550794e3b12ac       3 minutes ago        Exited              kube-scheduler            1                   b046056ff071b       kube-scheduler-functional-695625            kube-system
	I1229 06:57:24.427488   17440 command_runner.go:130] > bd96b57aa9fce       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       0                   64853b50a6c5e       storage-provisioner                         kube-system
	I1229 06:57:24.427509   17440 command_runner.go:130] > 8911777281f41       32652ff1bbe6b       4 minutes ago        Exited              kube-proxy                0                   548561c7ada8f       kube-proxy-g7lp9                            kube-system
	I1229 06:57:24.430056   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:57:24.430095   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:57:24.453665   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239338    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453712   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.239383    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453738   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244411    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:57:24.453770   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.244504    2634 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453809   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458139    2634 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter=""
	I1229 06:57:24.453838   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.458218    2634 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to get pod or container map: failed to list all containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453867   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926377    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.453891   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926435    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453911   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.926447    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453928   17440 command_runner.go:130] > Dec 29 06:53:31 functional-695625 kubelet[2634]: E1229 06:53:31.994121    2634 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453945   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927827    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.453961   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927867    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.453974   17440 command_runner.go:130] > Dec 29 06:53:32 functional-695625 kubelet[2634]: E1229 06:53:32.927930    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454002   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140553    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454022   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140635    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454040   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140653    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454058   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.140664    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454074   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930020    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454087   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930083    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454103   17440 command_runner.go:130] > Dec 29 06:53:33 functional-695625 kubelet[2634]: E1229 06:53:33.930129    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454120   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932311    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454135   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932363    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454149   17440 command_runner.go:130] > Dec 29 06:53:34 functional-695625 kubelet[2634]: E1229 06:53:34.932375    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454165   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140618    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454179   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140679    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454194   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140697    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454208   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.140709    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454224   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933321    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454246   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933382    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454262   17440 command_runner.go:130] > Dec 29 06:53:35 functional-695625 kubelet[2634]: E1229 06:53:35.933393    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454276   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241324    2634 log.go:32] "Status from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454294   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.241391    2634 kubelet.go:3115] "Container runtime sanity check failed" err="rpc error: code = Unknown desc = failed to get docker info from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454310   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935649    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454326   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935930    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454342   17440 command_runner.go:130] > Dec 29 06:53:36 functional-695625 kubelet[2634]: E1229 06:53:36.935948    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454358   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140389    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454371   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140507    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454386   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140525    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454401   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.140536    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454423   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937258    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454447   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937350    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454472   17440 command_runner.go:130] > Dec 29 06:53:37 functional-695625 kubelet[2634]: E1229 06:53:37.937364    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454500   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939069    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454519   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939129    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454533   17440 command_runner.go:130] > Dec 29 06:53:38 functional-695625 kubelet[2634]: E1229 06:53:38.939141    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454549   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139354    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="state:{}"
	I1229 06:57:24.454565   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139413    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454579   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139428    2634 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454593   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.139440    2634 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454608   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941237    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="<nil>"
	I1229 06:57:24.454625   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941285    2634 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454640   17440 command_runner.go:130] > Dec 29 06:53:39 functional-695625 kubelet[2634]: E1229 06:53:39.941296    2634 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.454655   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.109014    2634 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.97s"
	I1229 06:57:24.454667   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.125762    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:57:24.454680   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.129855    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.454697   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.131487    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.454714   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.140438    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.454729   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.454741   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:57:24.454816   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.454842   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454855   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454870   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.454881   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:57:24.454896   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:57:24.454912   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:57:24.454940   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:57:24.454957   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.454969   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:57:24.454987   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:57:24.455012   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:57:24.455025   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:57:24.455039   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.455055   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.455081   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.455097   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.455110   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:57:24.455125   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:57:24.455144   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.455165   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:57:24.455186   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:57:24.455204   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:57:24.455224   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.455243   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.455275   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:57:24.455294   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455310   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.455326   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.455345   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455366   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455386   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:57:24.455404   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:57:24.455423   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455446   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:57:24.455472   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455490   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:57:24.455506   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455528   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:57:24.455550   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.455573   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455588   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455603   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455615   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.455628   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.455640   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455657   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455669   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:57:24.455681   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455699   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:57:24.455720   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455739   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.455750   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:57:24.455810   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:57:24.455823   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:57:24.455835   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.455848   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:57:24.455860   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.455872   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.455892   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.455904   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:57:24.455916   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:57:24.455930   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.455967   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.455990   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456008   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456019   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:57:24.456031   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:57:24.456052   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456067   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.456078   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.456100   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:57:24.456114   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456124   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:57:24.456144   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456159   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.456169   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.456191   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456205   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.456216   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.456229   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:57:24.456239   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:57:24.456260   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:57:24.456304   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:57:24.456318   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.456331   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.456352   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.456364   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.456372   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:57:24.456379   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:57:24.456386   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:57:24.456396   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:57:24.456406   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:57:24.456423   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:57:24.456441   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:57:24.456458   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:57:24.456472   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:57:24.456487   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:57:24.456503   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:57:24.456520   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:57:24.456540   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:57:24.456560   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:57:24.456573   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:57:24.456584   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:57:24.456626   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:57:24.456639   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:57:24.456647   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:57:24.456657   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:57:24.456665   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:57:24.456676   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:57:24.456685   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:57:24.456695   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:57:24.456703   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:57:24.456714   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:57:24.456726   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:57:24.456739   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:57:24.456748   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:57:24.456761   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:57:24.456771   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.456782   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:57:24.456790   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:57:24.456811   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:57:24.456821   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:57:24.456832   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:57:24.456845   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:57:24.456853   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:57:24.456866   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:57:24.456875   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:57:24.456885   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:57:24.456893   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:57:24.456907   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:57:24.456918   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:57:24.456927   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:57:24.456937   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:57:24.456947   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:57:24.456959   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:57:24.456971   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:57:24.456990   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.457011   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.457023   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:57:24.457032   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:57:24.457044   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:57:24.457054   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:57:24.457067   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:57:24.457074   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:57:24.457083   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:57:24.457093   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:57:24.457105   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:57:24.457112   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:57:24.457125   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:57:24.457133   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:57:24.457145   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:57:24.457154   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:57:24.457168   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:57:24.457178   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457192   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457205   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457220   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:57:24.457235   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:57:24.457247   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:57:24.457258   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:57:24.457271   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:57:24.457284   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:57:24.457299   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457310   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:57:24.457322   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:57:24.457333   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:57:24.457345   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:57:24.457359   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:57:24.457370   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:57:24.457381   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:57:24.457396   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.457410   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.457436   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:57:24.457460   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457481   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457500   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:57:24.457515   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457533   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:57:24.457586   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.457604   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.457613   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.457633   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457649   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:57:24.457664   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.457680   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:57:24.457697   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.457717   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457740   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457763   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457785   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457817   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:57:24.457904   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:57:24.457927   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457948   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457976   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:57:24.457996   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.458019   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:57:24.458034   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458050   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:57:24.458090   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.458106   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.458116   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.458130   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458141   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458158   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.458170   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458184   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.458198   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458263   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.458295   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.458316   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.458339   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458367   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.458389   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.458409   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458429   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.458447   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:57:24.458468   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:57:24.458490   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.458512   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458529   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458542   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458572   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458587   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458602   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.458617   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458632   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458644   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458659   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.458674   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458686   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.458702   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458717   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.458732   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458746   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458762   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458777   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458790   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458824   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458839   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458852   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.458865   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458879   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458889   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458911   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458925   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.458939   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.458952   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.458964   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.458983   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.458998   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459016   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459031   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459048   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.459062   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459076   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.459090   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459104   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459118   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459132   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459145   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.459158   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459174   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:57:24.459186   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:57:24.459201   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459215   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459225   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459247   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459261   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459274   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:57:24.459286   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459302   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459314   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459334   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459352   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.459392   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.459418   17440 command_runner.go:130] > Dec 29 06:56:17 functional-695625 kubelet[6517]: E1229 06:56:17.801052    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.459438   17440 command_runner.go:130] > Dec 29 06:56:19 functional-695625 kubelet[6517]: I1229 06:56:19.403026    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.459461   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.297746    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459483   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342467    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459502   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342554    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459515   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.342589    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459537   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342829    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459552   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.385984    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459567   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386062    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459579   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.386078    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459599   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386220    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459613   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.298955    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459634   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.734998    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.459649   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185639    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459662   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185732    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459676   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.185750    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459693   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493651    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459707   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493733    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459720   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.493755    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459741   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493996    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459753   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.510294    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:57:24.459769   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511464    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459782   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511520    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459806   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.511535    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459829   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511684    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459845   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525404    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459859   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525467    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459875   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: I1229 06:56:34.525482    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459897   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525663    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459911   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.300040    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.459924   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342011    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.459938   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342082    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.459950   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.342099    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.459972   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342223    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.459987   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567456    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460000   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567665    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460016   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.567686    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460036   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.568152    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460053   17440 command_runner.go:130] > Dec 29 06:56:47 functional-695625 kubelet[6517]: E1229 06:56:47.736964    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:57:24.460094   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.098168    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.27202431 +0000 UTC m=+0.287773690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.460108   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.300747    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460124   17440 command_runner.go:130] > Dec 29 06:56:53 functional-695625 kubelet[6517]: E1229 06:56:53.405155    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:57:24.460136   17440 command_runner.go:130] > Dec 29 06:56:56 functional-695625 kubelet[6517]: I1229 06:56:56.606176    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:57:24.460148   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.301915    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460162   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.330173    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:57:24.460182   17440 command_runner.go:130] > Dec 29 06:57:04 functional-695625 kubelet[6517]: E1229 06:57:04.738681    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.460195   17440 command_runner.go:130] > Dec 29 06:57:10 functional-695625 kubelet[6517]: E1229 06:57:10.302083    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460206   17440 command_runner.go:130] > Dec 29 06:57:20 functional-695625 kubelet[6517]: E1229 06:57:20.302612    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:57:24.460221   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185645    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460236   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185704    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:57:24.460254   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.740062    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:57:24.460269   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.185952    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460283   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.186017    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460296   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.186034    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460308   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.873051    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:57:24.460321   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874264    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460334   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874357    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460347   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.874375    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:57:24.460367   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874499    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460381   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:57:24.460395   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892083    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:57:24.460414   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: I1229 06:57:23.892098    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:57:24.460450   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892218    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:57:24.460499   17440 command_runner.go:130] > Dec 29 06:57:24 functional-695625 kubelet[6517]: E1229 06:57:24.100978    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.27223373 +0000 UTC m=+0.287983111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:57:24.513870   17440 logs.go:123] Gathering logs for etcd [d81259f64136] ...
	I1229 06:57:24.513913   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d81259f64136"
	I1229 06:57:24.542868   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517725Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:57:24.542904   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.517828Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:57:24.542974   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.517848Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:57:24.542992   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519323Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:57:24.543020   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.519372Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:57:24.543037   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.519700Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:57:24.543067   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.522332Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:57:24.543085   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.530852Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:57:24.543199   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.531312Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:57:24.543237   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.533505Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00006a930}"}
	I1229 06:57:24.543258   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.562961Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:57:24.543276   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.566967Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"33.344174ms"}
	I1229 06:57:24.543291   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.569353Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":497}
	I1229 06:57:24.543306   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596637Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:57:24.543327   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596694Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:57:24.543344   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.596795Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:57:24.543365   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.620855Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":497}
	I1229 06:57:24.543380   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.621587Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:57:24.543393   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624518Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:57:24.543419   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624664Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:57:24.543437   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624700Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:57:24.543464   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624712Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:57:24.543483   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624720Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:57:24.543499   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624728Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:57:24.543511   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624764Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:57:24.543561   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:57:24.543585   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 2"}
	I1229 06:57:24.543605   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.624867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 2, commit: 497, applied: 0, lastindex: 497, lastterm: 2]"}
	I1229 06:57:24.543623   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:50.634002Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:57:24.543659   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.644772Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:57:24.543680   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.681530Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:57:24.543701   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686046Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:57:24.543722   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686350Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:57:24.543744   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.686391Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:57:24.543770   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687141Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:57:24.543821   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687399Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:57:24.543840   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687425Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:57:24.543865   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687475Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:57:24.543886   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687536Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:57:24.543908   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687564Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:57:24.543927   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687571Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:57:24.543945   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687702Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.543962   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.687713Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:57:24.543980   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:57:24.544010   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.692847Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:57:24.544031   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.694703Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:57:24.544065   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830725Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	I1229 06:57:24.544084   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	I1229 06:57:24.544103   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	I1229 06:57:24.544120   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:57:24.544136   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.830936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 3"}
	I1229 06:57:24.544157   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:57:24.544176   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:57:24.544193   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832148Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 3"}
	I1229 06:57:24.544213   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.832166Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	I1229 06:57:24.544224   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835446Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:57:24.544248   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.835384Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:57:24.544264   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:57:24.544283   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.839733Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:57:24.544298   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:57:24.544314   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:57:24.544331   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.851748Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:57:24.544345   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.856729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:57:24.544364   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:50.869216Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:57:24.544381   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706108Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	I1229 06:57:24.544405   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:51.706269Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:57:24.544430   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:51.706381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:57:24.544465   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.707655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	I1229 06:57:24.544517   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.709799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544537   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.709913Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbdf275f553df7c2","current-leader-member-id":"cbdf275f553df7c2"}
	I1229 06:57:24.544554   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710255Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	I1229 06:57:24.544575   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710690Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:57:24.544595   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.710782Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	I1229 06:57:24.544623   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.710832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544641   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.710742Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	I1229 06:57:24.544662   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:57:24.544683   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:53:58.711035Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.121:2379: use of closed network connection"}
	I1229 06:57:24.544711   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.711045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544730   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.544767   17440 command_runner.go:130] ! {"level":"error","ts":"2025-12-29T06:53:58.717551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.121:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	I1229 06:57:24.544807   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717601Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:57:24.544828   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:53:58.717654Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-695625","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"]}
	I1229 06:57:24.552509   17440 logs.go:123] Gathering logs for coredns [6f69ba6a1553] ...
	I1229 06:57:24.552540   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f69ba6a1553"
	I1229 06:57:24.575005   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:57:24.575036   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:57:24.597505   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.597545   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.597560   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.597577   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.597596   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:57:24.597610   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:57:24.597628   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:57:24.597642   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.597654   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.597667   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:57:24.597682   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.597705   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.597733   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.597753   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.597765   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.597773   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:57:24.597803   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.597814   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:57:24.597825   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.597834   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.597841   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:57:24.597848   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.597856   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.597866   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.597874   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.597883   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.597900   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.597909   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.597916   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.597925   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.597936   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.597944   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.597953   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:57:24.597960   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.597973   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.597981   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.597991   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.597999   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.598010   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.598017   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598029   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:57:24.598041   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:57:24.598054   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598067   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598074   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598084   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598095   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:57:24.598104   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.598111   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.598117   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.598126   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.598132   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:57:24.598141   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.598154   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.598174   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.598186   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.598196   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.598205   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:57:24.598224   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.598235   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:57:24.598246   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.598256   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.598264   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.598273   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.598281   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.598289   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.598297   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.598306   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.598314   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.598320   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:57:24.598327   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598334   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.598345   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.598354   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.598365   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.598373   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:57:24.598381   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.598389   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.598400   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.598415   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.598431   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.598447   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.598463   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.598476   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598492   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598503   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:57:24.598513   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.598522   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.598531   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.598538   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.598545   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:57:24.598555   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.598578   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.598591   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.598602   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.598613   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.598621   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:57:24.598642   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.598653   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:57:24.598664   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.598674   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.598683   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.598693   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.598701   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.598716   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.598724   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.598732   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.598760   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598774   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598787   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598815   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598832   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:57:24.598845   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598860   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598873   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598889   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.598904   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.598918   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.598933   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598946   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:57:24.598958   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:57:24.598973   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:57:24.598980   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:57:24.598989   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:57:24.598999   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:57:24.599008   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:57:24.599015   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:57:24.599022   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:57:24.599030   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:57:24.599036   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:57:24.599043   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:57:24.599054   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:57:24.599065   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:57:24.599077   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:57:24.599088   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:57:24.599099   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:57:24.599107   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:57:24.599120   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599138   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599151   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599168   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599185   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599198   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599213   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599228   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599241   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599257   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599270   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599285   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599297   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.599319   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:57:24.599331   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:57:24.599346   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:57:24.599359   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:57:24.599376   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:57:24.599387   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:57:24.599405   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:57:24.599423   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:57:24.599452   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:57:24.599472   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:57:24.599489   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:57:24.599503   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:57:24.599517   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:57:24.599529   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:57:24.599544   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:57:24.599559   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:57:24.599572   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:57:24.599587   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:57:24.599602   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:57:24.599615   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:57:24.599631   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:57:24.599644   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:57:24.599654   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:57:24.599664   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:57:24.599673   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:57:24.599682   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:57:24.599692   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:57:24.599700   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:57:24.599710   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:57:24.599747   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:57:24.599756   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:57:24.599772   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:57:24.599782   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.599789   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:57:24.599806   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.599814   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:57:24.599822   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:57:24.599830   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:57:24.599841   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:57:24.599849   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:57:24.599860   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:57:24.599868   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:57:24.599879   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:57:24.599886   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:57:24.599896   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:57:24.599907   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:57:24.599914   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:57:24.599922   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:57:24.599934   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599953   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599970   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.599983   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600000   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600017   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600034   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:57:24.600049   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600063   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600079   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600092   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600107   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600121   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600137   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600152   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600164   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600177   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600190   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600207   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:57:24.600223   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600235   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:57:24.600247   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:57:24.600261   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600276   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:57:24.600288   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:57:24.600304   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600317   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600331   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600345   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600357   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600373   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600386   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 dockerd[4014]: time="2025-12-29T06:56:32.448119389Z" level=info msg="ignoring event" container=0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.600403   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:57:24.600423   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:57:24.600448   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:57:24.600472   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:57:24.600490   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 dockerd[4014]: time="2025-12-29T06:57:22.465508622Z" level=info msg="ignoring event" container=b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:57:24.619075   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:57:24.619123   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:58:24.700496   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:58:24.700542   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.081407425s)
	W1229 06:58:24.700578   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:58:24.700591   17440 logs.go:123] Gathering logs for etcd [6b7711ee25a2] ...
	I1229 06:58:24.700607   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b7711ee25a2"
	I1229 06:58:24.726206   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.924768Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:58:24.726238   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925193Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	I1229 06:58:24.726283   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925252Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.121:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.121:2380","--initial-cluster=functional-695625=https://192.168.39.121:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.121:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.121:2380","--name=functional-695625","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	I1229 06:58:24.726296   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925487Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1229 06:58:24.726311   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.925602Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1229 06:58:24.726321   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925710Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.121:2380"]}
	I1229 06:58:24.726342   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.925810Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:58:24.726358   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.934471Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"]}
	I1229 06:58:24.726438   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.935217Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"d2809cf","go-version":"go1.24.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-695625","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	I1229 06:58:24.726461   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.937503Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000068080}"}
	I1229 06:58:24.726472   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940423Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	I1229 06:58:24.726483   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.940850Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.479356ms"}
	I1229 06:58:24.726492   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.941120Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":499}
	I1229 06:58:24.726503   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945006Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1229 06:58:24.726517   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945707Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1019904,"backend-size":"1.0 MB","backend-size-in-use-bytes":999424,"backend-size-in-use":"999 kB"}
	I1229 06:58:24.726528   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.945966Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	I1229 06:58:24.726540   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.951906Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","commit-index":499}
	I1229 06:58:24.726552   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952063Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	I1229 06:58:24.726560   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952160Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	I1229 06:58:24.726577   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952338Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:cbdf275f553df7c2 RaftAttributes:{PeerURLs:[https://192.168.39.121:2380] IsLearner:false} Attributes:{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}}"}
	I1229 06:58:24.726590   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952385Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	I1229 06:58:24.726607   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952396Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","recovered-remote-peer-id":"cbdf275f553df7c2","recovered-remote-peer-urls":["https://192.168.39.121:2380"],"recovered-remote-peer-is-learner":false}
	I1229 06:58:24.726618   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952406Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	I1229 06:58:24.726629   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952416Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	I1229 06:58:24.726636   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952460Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	I1229 06:58:24.726647   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=()"}
	I1229 06:58:24.726657   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"cbdf275f553df7c2 became follower at term 3"}
	I1229 06:58:24.726670   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.952619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft cbdf275f553df7c2 [peers: [], term: 3, commit: 499, applied: 0, lastindex: 499, lastterm: 3]"}
	I1229 06:58:24.726680   17440 command_runner.go:130] ! {"level":"warn","ts":"2025-12-29T06:56:00.955095Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	I1229 06:58:24.726698   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.961356Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":465}
	I1229 06:58:24.726711   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.967658Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1229 06:58:24.726723   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.968487Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"cbdf275f553df7c2","timeout":"7s"}
	I1229 06:58:24.726735   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969020Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"cbdf275f553df7c2"}
	I1229 06:58:24.726750   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969260Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.6.6","cluster-id":"6f38b6947d3f1f22","cluster-version":"3.6"}
	I1229 06:58:24.726765   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.969708Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"cbdf275f553df7c2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I1229 06:58:24.726784   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970043Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1229 06:58:24.726826   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970828Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1229 06:58:24.726839   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971046Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1229 06:58:24.726848   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970057Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	I1229 06:58:24.726858   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971258Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.121:2380"}
	I1229 06:58:24.726870   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970152Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1229 06:58:24.726883   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971336Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1229 06:58:24.726896   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.971370Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1229 06:58:24.726906   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.970393Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	I1229 06:58:24.726922   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972410Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"],"added-peer-is-learner":false}
	I1229 06:58:24.726935   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:00.972698Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","from":"3.6","to":"3.6"}
	I1229 06:58:24.726947   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cbdf275f553df7c2 is starting a new election at term 3"}
	I1229 06:58:24.726956   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cbdf275f553df7c2 became pre-candidate at term 3"}
	I1229 06:58:24.726969   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.353992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 3"}
	I1229 06:58:24.726982   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	I1229 06:58:24.726997   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.354031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cbdf275f553df7c2 became candidate at term 4"}
	I1229 06:58:24.727009   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 4"}
	I1229 06:58:24.727020   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cbdf275f553df7c2 has received 1 MsgVoteResp votes and 0 vote rejections"}
	I1229 06:58:24.727029   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.355940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cbdf275f553df7c2 became leader at term 4"}
	I1229 06:58:24.727039   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.356018Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 4"}
	I1229 06:58:24.727056   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358237Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:functional-695625 ClientURLs:[https://192.168.39.121:2379]}","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	I1229 06:58:24.727064   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358323Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:58:24.727072   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358268Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	I1229 06:58:24.727081   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1229 06:58:24.727089   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.358859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1229 06:58:24.727100   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360417Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:58:24.727109   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.360952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1229 06:58:24.727120   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1229 06:58:24.727132   17440 command_runner.go:130] ! {"level":"info","ts":"2025-12-29T06:56:01.363760Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	I1229 06:58:24.733042   17440 logs.go:123] Gathering logs for kube-scheduler [a79d99ad3fde] ...
	I1229 06:58:24.733064   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79d99ad3fde"
	I1229 06:58:24.755028   17440 command_runner.go:130] ! I1229 06:53:51.269699       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.755231   17440 logs.go:123] Gathering logs for kube-proxy [8911777281f4] ...
	I1229 06:58:24.755256   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8911777281f4"
	I1229 06:58:24.776073   17440 command_runner.go:130] ! I1229 06:52:47.703648       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:58:24.776109   17440 command_runner.go:130] ! I1229 06:52:47.791676       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:58:24.776120   17440 command_runner.go:130] ! I1229 06:52:47.897173       1 shared_informer.go:377] "Caches are synced"
	I1229 06:58:24.776135   17440 command_runner.go:130] ! I1229 06:52:47.900073       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.121"]
	I1229 06:58:24.776154   17440 command_runner.go:130] ! E1229 06:52:47.906310       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:58:24.776162   17440 command_runner.go:130] ! I1229 06:52:48.206121       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
	I1229 06:58:24.776180   17440 command_runner.go:130] ! 	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1229 06:58:24.776188   17440 command_runner.go:130] ! 	Perhaps ip6tables or your kernel needs to be upgraded.
	I1229 06:58:24.776195   17440 command_runner.go:130] !  >
	I1229 06:58:24.776212   17440 command_runner.go:130] ! I1229 06:52:48.209509       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 06:58:24.776224   17440 command_runner.go:130] ! I1229 06:52:48.210145       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:58:24.776249   17440 command_runner.go:130] ! I1229 06:52:48.253805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:58:24.776257   17440 command_runner.go:130] ! I1229 06:52:48.255046       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:58:24.776266   17440 command_runner.go:130] ! I1229 06:52:48.255076       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.776282   17440 command_runner.go:130] ! I1229 06:52:48.262205       1 config.go:200] "Starting service config controller"
	I1229 06:58:24.776296   17440 command_runner.go:130] ! I1229 06:52:48.262238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:58:24.776307   17440 command_runner.go:130] ! I1229 06:52:48.262258       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:58:24.776328   17440 command_runner.go:130] ! I1229 06:52:48.262261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:58:24.776350   17440 command_runner.go:130] ! I1229 06:52:48.262278       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:58:24.776366   17440 command_runner.go:130] ! I1229 06:52:48.262282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:58:24.776376   17440 command_runner.go:130] ! I1229 06:52:48.270608       1 config.go:309] "Starting node config controller"
	I1229 06:58:24.776388   17440 command_runner.go:130] ! I1229 06:52:48.271311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:58:24.776404   17440 command_runner.go:130] ! I1229 06:52:48.271337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:58:24.776420   17440 command_runner.go:130] ! I1229 06:52:48.363324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:58:24.776439   17440 command_runner.go:130] ! I1229 06:52:48.363427       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 06:58:24.776453   17440 command_runner.go:130] ! I1229 06:52:48.363671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:58:24.778558   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:58:24.778595   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:58:24.793983   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:58:24.794025   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:58:24.794040   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:58:24.794054   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:58:24.794069   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:58:24.794079   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:58:24.794096   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:58:24.794106   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:58:24.794117   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:58:24.794125   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:58:24.794136   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:58:24.794146   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:58:24.794160   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:24.794167   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:58:24.794178   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:58:24.794186   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:24.794196   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:24.794207   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:24.794215   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:58:24.794221   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:58:24.794229   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:58:24.794241   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:58:24.794252   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:58:24.794260   17440 command_runner.go:130] > [ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:24.794271   17440 command_runner.go:130] > [Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:24.795355   17440 logs.go:123] Gathering logs for kube-scheduler [4d49952084c9] ...
	I1229 06:58:24.795387   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d49952084c9"
	I1229 06:58:24.820602   17440 command_runner.go:130] ! I1229 06:53:52.882050       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.820635   17440 command_runner.go:130] ! W1229 06:54:52.896472       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	I1229 06:58:24.820646   17440 command_runner.go:130] ! W1229 06:54:52.896499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1229 06:58:24.820657   17440 command_runner.go:130] ! W1229 06:54:52.896506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 06:58:24.820665   17440 command_runner.go:130] ! I1229 06:54:52.913597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 06:58:24.820672   17440 command_runner.go:130] ! I1229 06:54:52.913622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.820681   17440 command_runner.go:130] ! I1229 06:54:52.915784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 06:58:24.820692   17440 command_runner.go:130] ! I1229 06:54:52.915816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:58:24.820698   17440 command_runner.go:130] ! I1229 06:54:52.915823       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 06:58:24.820705   17440 command_runner.go:130] ! I1229 06:54:52.915940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:58:24.822450   17440 logs.go:123] Gathering logs for kube-controller-manager [17fe16a2822a] ...
	I1229 06:58:24.822473   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17fe16a2822a"
	I1229 06:58:24.844122   17440 command_runner.go:130] ! I1229 06:53:51.283329       1 serving.go:386] Generated self-signed cert in-memory
	I1229 06:58:24.844156   17440 command_runner.go:130] ! I1229 06:53:51.303666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1229 06:58:24.844170   17440 command_runner.go:130] ! I1229 06:53:51.303706       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:24.844184   17440 command_runner.go:130] ! I1229 06:53:51.307865       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 06:58:24.844201   17440 command_runner.go:130] ! I1229 06:53:51.308287       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:24.844210   17440 command_runner.go:130] ! I1229 06:53:51.309479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1229 06:58:24.844218   17440 command_runner.go:130] ! I1229 06:53:51.309545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 06:58:24.845429   17440 logs.go:123] Gathering logs for storage-provisioner [bd96b57aa9fc] ...
	I1229 06:58:24.845453   17440 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd96b57aa9fc"
	I1229 06:58:24.867566   17440 command_runner.go:130] ! I1229 06:52:48.539098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 06:58:24.867597   17440 command_runner.go:130] ! I1229 06:52:48.550309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 06:58:24.867607   17440 command_runner.go:130] ! I1229 06:52:48.550373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 06:58:24.867615   17440 command_runner.go:130] ! W1229 06:52:48.552935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867622   17440 command_runner.go:130] ! W1229 06:52:48.563735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867633   17440 command_runner.go:130] ! I1229 06:52:48.564362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 06:58:24.867653   17440 command_runner.go:130] ! I1229 06:52:48.565422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:58:24.867681   17440 command_runner.go:130] ! I1229 06:52:48.565143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb65e16-c2f7-4c19-a059-8ef64f8f3f2e", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868 became leader
	I1229 06:58:24.867694   17440 command_runner.go:130] ! W1229 06:52:48.576668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867704   17440 command_runner.go:130] ! W1229 06:52:48.582743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867719   17440 command_runner.go:130] ! I1229 06:52:48.665711       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-695625_c1740534-d530-4bf5-8b9a-b5bede576868!
	I1229 06:58:24.867734   17440 command_runner.go:130] ! W1229 06:52:50.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867750   17440 command_runner.go:130] ! W1229 06:52:50.593815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867763   17440 command_runner.go:130] ! W1229 06:52:52.597431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867817   17440 command_runner.go:130] ! W1229 06:52:52.602815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867836   17440 command_runner.go:130] ! W1229 06:52:54.606663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867848   17440 command_runner.go:130] ! W1229 06:52:54.612650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867859   17440 command_runner.go:130] ! W1229 06:52:56.616395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867871   17440 command_runner.go:130] ! W1229 06:52:56.622404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867883   17440 command_runner.go:130] ! W1229 06:52:58.626804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867891   17440 command_runner.go:130] ! W1229 06:52:58.637257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867901   17440 command_runner.go:130] ! W1229 06:53:00.640728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867914   17440 command_runner.go:130] ! W1229 06:53:00.646446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867926   17440 command_runner.go:130] ! W1229 06:53:02.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867944   17440 command_runner.go:130] ! W1229 06:53:02.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867956   17440 command_runner.go:130] ! W1229 06:53:04.664091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867972   17440 command_runner.go:130] ! W1229 06:53:04.669806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867982   17440 command_runner.go:130] ! W1229 06:53:06.674203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.867997   17440 command_runner.go:130] ! W1229 06:53:06.680002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868013   17440 command_runner.go:130] ! W1229 06:53:08.683483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868028   17440 command_runner.go:130] ! W1229 06:53:08.688934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868048   17440 command_runner.go:130] ! W1229 06:53:10.693644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868063   17440 command_runner.go:130] ! W1229 06:53:10.706122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868071   17440 command_runner.go:130] ! W1229 06:53:12.709949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868081   17440 command_runner.go:130] ! W1229 06:53:12.715753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868098   17440 command_runner.go:130] ! W1229 06:53:14.719191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868111   17440 command_runner.go:130] ! W1229 06:53:14.728100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868127   17440 command_runner.go:130] ! W1229 06:53:16.731658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868140   17440 command_runner.go:130] ! W1229 06:53:16.737463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868153   17440 command_runner.go:130] ! W1229 06:53:18.741304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868164   17440 command_runner.go:130] ! W1229 06:53:18.746708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868177   17440 command_runner.go:130] ! W1229 06:53:20.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868192   17440 command_runner.go:130] ! W1229 06:53:20.755989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868207   17440 command_runner.go:130] ! W1229 06:53:22.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868221   17440 command_runner.go:130] ! W1229 06:53:22.772421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868236   17440 command_runner.go:130] ! W1229 06:53:24.776403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868247   17440 command_runner.go:130] ! W1229 06:53:24.783232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868258   17440 command_runner.go:130] ! W1229 06:53:26.786665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868275   17440 command_runner.go:130] ! W1229 06:53:26.792239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868290   17440 command_runner.go:130] ! W1229 06:53:28.796420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868304   17440 command_runner.go:130] ! W1229 06:53:28.805511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868320   17440 command_runner.go:130] ! W1229 06:53:30.808544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868332   17440 command_runner.go:130] ! W1229 06:53:30.816066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868342   17440 command_runner.go:130] ! W1229 06:53:32.820090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868358   17440 command_runner.go:130] ! W1229 06:53:32.826208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868373   17440 command_runner.go:130] ! W1229 06:53:34.829865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868385   17440 command_runner.go:130] ! W1229 06:53:34.835774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868400   17440 command_runner.go:130] ! W1229 06:53:36.839291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868414   17440 command_runner.go:130] ! W1229 06:53:36.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868425   17440 command_runner.go:130] ! W1229 06:53:38.856432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 06:58:24.868438   17440 command_runner.go:130] ! W1229 06:53:38.862360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
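
The storage-provisioner log above is dominated by the repeated "v1 Endpoints is deprecated in v1.33+" warnings: its leader election keeps renewing the lease recorded on the core/v1 Endpoints object kube-system/k8s.io-minikube-hostpath roughly every two seconds, and each renewal returns the server-side deprecation warning. One way to inspect the Endpoints object it keeps updating (an illustrative command assuming kubectl access to this cluster; not part of the test run):

    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
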
	I1229 06:58:24.872821   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:58:24.872842   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 06:58:24.872901   17440 out.go:285] X Problems detected in kube-apiserver [b206d555ad19]:
	W1229 06:58:24.872915   17440 out.go:285]   E1229 06:57:22.441956       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	I1229 06:58:24.872919   17440 out.go:374] Setting ErrFile to fd 2...
	I1229 06:58:24.872923   17440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:58:34.875381   17440 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
	I1229 06:58:39.877679   17440 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 06:58:39.877779   17440 kubeadm.go:602] duration metric: took 4m48.388076341s to restartPrimaryControlPlane
	W1229 06:58:39.877879   17440 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
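
The healthz probe against https://192.168.39.121:8441/healthz is still timing out here (the kube-apiserver container flagged above exited because port 8441 was already in use), so after 4m48s of retries minikube gives up on restarting the existing control plane and falls back to kubeadm reset followed by kubeadm init. The probe can be reproduced from the host with something like the following (illustrative only; -k skips certificate verification, and the address is taken from the log):

    curl -k --max-time 5 https://192.168.39.121:8441/healthz
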
	I1229 06:58:39.877946   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 06:58:39.930050   17440 command_runner.go:130] ! W1229 06:58:39.921577    8187 resetconfiguration.go:53] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1229 06:58:49.935089   17440 command_runner.go:130] ! W1229 06:58:49.926653    8187 reset.go:141] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
	I1229 06:58:49.935131   17440 command_runner.go:130] ! W1229 06:58:49.926754    8187 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
	I1229 06:58:50.998307   17440 command_runner.go:130] > [reset] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I1229 06:58:50.998341   17440 command_runner.go:130] > [reset] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
	I1229 06:58:50.998348   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:50.998357   17440 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/minikube/etcd
	I1229 06:58:50.998366   17440 command_runner.go:130] > [reset] Stopping the kubelet service
	I1229 06:58:50.998372   17440 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I1229 06:58:50.998386   17440 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I1229 06:58:50.998407   17440 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I1229 06:58:50.998417   17440 command_runner.go:130] > The reset process does not perform cleanup of CNI plugin configuration,
	I1229 06:58:50.998428   17440 command_runner.go:130] > network filtering rules and kubeconfig files.
	I1229 06:58:50.998434   17440 command_runner.go:130] > For information on how to perform this cleanup manually, please see:
	I1229 06:58:50.998442   17440 command_runner.go:130] >     https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
	I1229 06:58:50.998458   17440 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (11.120499642s)
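
The reset completes, but note the kubeadm warning above: passing the CRI socket as a bare path (/var/run/cri-dockerd.sock) is deprecated, and kubeadm prepends the unix scheme itself. The scheme-qualified form it expects would look like this (an illustrative invocation mirroring the command in the log, not minikube source):

    sudo kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock --force
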
	I1229 06:58:50.998527   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:58:51.015635   17440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:58:51.028198   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:58:51.040741   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1229 06:58:51.040780   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1229 06:58:51.040811   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1229 06:58:51.040826   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.040865   17440 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.040877   17440 kubeadm.go:158] found existing configuration files:
	
	I1229 06:58:51.040925   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:58:51.051673   17440 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.052090   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.052155   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:58:51.064755   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:58:51.076455   17440 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.076517   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.076577   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:58:51.088881   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.099253   17440 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.099652   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.099710   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.111487   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:58:51.122532   17440 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.122905   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.122972   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
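
The kubeadm reset above already deleted admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, so every ls and grep in this stale-config check exits with status 2 and the matching rm -f is a no-op. Condensed, the per-file check minikube runs on the node amounts to the following (an illustrative reconstruction of the commands shown in the log, not minikube source):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8441" /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done
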
	I1229 06:58:51.135143   17440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 06:58:51.355420   17440 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.355450   17440 command_runner.go:130] ! 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.355543   17440 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 06:58:51.355556   17440 command_runner.go:130] ! [preflight] Some fatal errors occurred:
	I1229 06:58:51.355615   17440 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.355625   17440 command_runner.go:130] ! 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.355790   17440 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.355837   17440 command_runner.go:130] ! [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.356251   17440 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.356265   17440 command_runner.go:130] ! error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.356317   17440 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.356324   17440 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.357454   17440 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.357471   17440 command_runner.go:130] > [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.357544   17440 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:58:51.357561   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	W1229 06:58:51.357680   17440 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 06:58:51.357753   17440 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 06:58:51.401004   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:51.401036   17440 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I1229 06:58:51.401047   17440 command_runner.go:130] > [reset] Stopping the kubelet service
	I1229 06:58:51.408535   17440 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I1229 06:58:51.413813   17440 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I1229 06:58:51.415092   17440 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I1229 06:58:51.415117   17440 command_runner.go:130] > The reset process does not perform cleanup of CNI plugin configuration,
	I1229 06:58:51.415128   17440 command_runner.go:130] > network filtering rules and kubeconfig files.
	I1229 06:58:51.415137   17440 command_runner.go:130] > For information on how to perform this cleanup manually, please see:
	I1229 06:58:51.415145   17440 command_runner.go:130] >     https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
	I1229 06:58:51.415645   17440 command_runner.go:130] ! W1229 06:58:51.391426    8625 resetconfiguration.go:53] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1229 06:58:51.415670   17440 command_runner.go:130] ! W1229 06:58:51.392518    8625 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
	I1229 06:58:51.415739   17440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:58:51.432316   17440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:58:51.444836   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1229 06:58:51.444860   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1229 06:58:51.444867   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1229 06:58:51.444874   17440 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.445417   17440 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:58:51.445435   17440 kubeadm.go:158] found existing configuration files:
	
	I1229 06:58:51.445485   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 06:58:51.457038   17440 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.457099   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:58:51.457146   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:58:51.469980   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 06:58:51.480965   17440 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.481435   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:58:51.481498   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:58:51.493408   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.504342   17440 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.504404   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:58:51.504468   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:58:51.516567   17440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 06:58:51.526975   17440 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.527475   17440 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:58:51.527532   17440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:58:51.539365   17440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 06:58:51.587038   17440 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.587068   17440 command_runner.go:130] > [init] Using Kubernetes version: v1.35.0
	I1229 06:58:51.587108   17440 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:58:51.587113   17440 command_runner.go:130] > [preflight] Running pre-flight checks
	I1229 06:58:51.738880   17440 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.738912   17440 command_runner.go:130] ! 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 06:58:51.738963   17440 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 06:58:51.738975   17440 command_runner.go:130] ! [preflight] Some fatal errors occurred:
	I1229 06:58:51.739029   17440 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.739038   17440 command_runner.go:130] ! 	[ERROR Port-8441]: Port 8441 is in use
	I1229 06:58:51.739157   17440 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.739166   17440 command_runner.go:130] ! [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 06:58:51.739271   17440 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.739294   17440 command_runner.go:130] ! error: error execution phase preflight: preflight checks failed
	I1229 06:58:51.739348   17440 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.739355   17440 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1229 06:58:51.739406   17440 kubeadm.go:403] duration metric: took 5m0.289116828s to StartCluster
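
Both kubeadm init attempts die in preflight on the same check, [ERROR Port-8441]: Port 8441 is in use, which is consistent with the earlier "bind: address already in use" failure from the kube-apiserver container. One way to see which process is still holding the port on the node would be (an illustrative diagnostic, not something the test runs):

    sudo ss -ltnp | grep ':8441'
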
	I1229 06:58:51.739455   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 06:58:51.739507   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 06:58:51.776396   17440 cri.go:96] found id: ""
	I1229 06:58:51.776420   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.776428   17440 logs.go:284] No container was found matching "kube-apiserver"
	I1229 06:58:51.776434   17440 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 06:58:51.776522   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 06:58:51.808533   17440 cri.go:96] found id: ""
	I1229 06:58:51.808556   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.808563   17440 logs.go:284] No container was found matching "etcd"
	I1229 06:58:51.808570   17440 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 06:58:51.808625   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 06:58:51.841860   17440 cri.go:96] found id: ""
	I1229 06:58:51.841887   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.841894   17440 logs.go:284] No container was found matching "coredns"
	I1229 06:58:51.841900   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 06:58:51.841955   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 06:58:51.875485   17440 cri.go:96] found id: ""
	I1229 06:58:51.875512   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.875520   17440 logs.go:284] No container was found matching "kube-scheduler"
	I1229 06:58:51.875526   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 06:58:51.875576   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 06:58:51.909661   17440 cri.go:96] found id: ""
	I1229 06:58:51.909699   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.909712   17440 logs.go:284] No container was found matching "kube-proxy"
	I1229 06:58:51.909720   17440 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 06:58:51.909790   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 06:58:51.943557   17440 cri.go:96] found id: ""
	I1229 06:58:51.943594   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.943607   17440 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 06:58:51.943616   17440 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 06:58:51.943685   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 06:58:51.979189   17440 cri.go:96] found id: ""
	I1229 06:58:51.979219   17440 logs.go:282] 0 containers: []
	W1229 06:58:51.979228   17440 logs.go:284] No container was found matching "kindnet"
	I1229 06:58:51.979234   17440 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 06:58:51.979285   17440 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 06:58:52.013436   17440 cri.go:96] found id: ""
	I1229 06:58:52.013472   17440 logs.go:282] 0 containers: []
	W1229 06:58:52.013482   17440 logs.go:284] No container was found matching "storage-provisioner"
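
With the control plane torn down, none of the expected containers exist any more, so every crictl query above returns an empty list. The sweep condenses to the following loop (same flags as in the log; an illustrative reconstruction, not minikube source):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      sudo crictl --timeout=10s ps -a --quiet --name="$name"
    done
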
	I1229 06:58:52.013494   17440 logs.go:123] Gathering logs for kubelet ...
	I1229 06:58:52.013507   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 06:58:52.030384   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.141703    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.030429   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.164789    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:58:52.030454   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: I1229 06:53:48.190793    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.030481   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202029    2634 kuberuntime_manager.go:1961] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030506   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202077    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030530   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202095    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.030550   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202348    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:58:52.030574   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202382    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:58:52.030601   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202394    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-proxy-g7lp9"
	I1229 06:58:52.030643   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202436    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-g7lp9_kube-system(9c2c2ac1-7fa0-427d-b78e-ee14e169895a)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-proxy-g7lp9" podUID="9c2c2ac1-7fa0-427d-b78e-ee14e169895a"
	I1229 06:58:52.030670   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202695    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.030694   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202717    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"}
	I1229 06:58:52.030721   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202737    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:58:52.030757   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202753    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f201ca-6d54-4e15-9584-396fb1486f3c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/storage-provisioner" podUID="b5f201ca-6d54-4e15-9584-396fb1486f3c"
	I1229 06:58:52.030787   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202781    2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF"
	I1229 06:58:52.030826   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202797    2634 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.030853   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.202829    2634 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.030893   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203153    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\\\": rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.030921   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203201    2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = error reading from server: EOF" podSandboxID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:58:52.030943   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203215    2634 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"docker","ID":"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"}
	I1229 06:58:52.030981   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203229    2634 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\""
	I1229 06:58:52.031015   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.203240    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00a95e37-1394-45a7-a376-b195e31e3e9c\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = error reading from server: EOF\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.031053   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.205108    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer"
	I1229 06:58:52.031087   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205291    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"ebc0dd45a3bf1e20d1e524935fd6129c\"}"
	I1229 06:58:52.031117   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205358    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="ebc0dd45a3bf1e20d1e524935fd6129c"
	I1229 06:58:52.031146   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205374    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.031189   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205391    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.031223   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205442    2634 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" filter="label_selector:{key:\"io.kubernetes.pod.uid\"  value:\"5079d003096e0cf8214852718da6832c\"}"
	I1229 06:58:52.031253   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205472    2634 kuberuntime_sandbox.go:351] "Failed to list sandboxes for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.031281   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205487    2634 generic.go:455] "PLEG: Write status" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.031311   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: E1229 06:53:48.205502    2634 generic.go:300] "PLEG: Ignoring events for pod" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read unix @->/run/cri-dockerd.sock: read: connection reset by peer\"" pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.031347   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.306369    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031383   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.465709    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031422   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 kubelet[2634]: W1229 06:53:48.727775    2634 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/cri-dockerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	I1229 06:58:52.031445   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.224724    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:58:52.031467   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.225054    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031491   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.239349    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:58:52.031516   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.239613    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031538   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.260924    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:58:52.031562   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.262706    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031584   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: I1229 06:53:49.271403    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:58:52.031606   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 kubelet[2634]: E1229 06:53:49.272071    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.031628   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.486082    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031651   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.527267    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031673   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.585714    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031695   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 kubelet[2634]: E1229 06:53:50.682419    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.031717   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 kubelet[2634]: E1229 06:53:51.994421    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.031738   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.009282    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.031763   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.028514    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.031786   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: I1229 06:53:52.059063    2634 scope.go:122] "RemoveContainer" containerID="4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c"
	I1229 06:58:52.031824   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.061268    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.031855   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.136206    2634 controller.go:251] "Failed to update lease" err="Put \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:58:52.031894   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.348866    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.031949   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 kubelet[2634]: E1229 06:53:52.420977    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.031981   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.083455    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:58:52.032005   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099631    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:58:52.032025   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.099665    2634 scope.go:122] "RemoveContainer" containerID="14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697"
	I1229 06:58:52.032048   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.099823    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032069   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.114949    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:58:52.032093   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115125    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.032112   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.115147    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.032150   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.115570    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.032170   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128002    2634 scope.go:122] "RemoveContainer" containerID="abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32"
	I1229 06:58:52.032192   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128620    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:58:52.032214   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.128846    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032234   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.128862    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032269   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.129184    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032290   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.146245    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032314   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.149274    2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:58:52.032335   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: I1229 06:53:53.158968    2634 scope.go:122] "RemoveContainer" containerID="bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159"
	I1229 06:58:52.032371   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 kubelet[2634]: E1229 06:53:53.483523    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032395   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.165031    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.032414   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.165425    2634 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.032452   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.166088    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-695625_kube-system(5079d003096e0cf8214852718da6832c)\"" pod="kube-system/kube-controller-manager-functional-695625" podUID="5079d003096e0cf8214852718da6832c"
	I1229 06:58:52.032473   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.177787    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032495   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.177811    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:58:52.032530   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.178010    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032552   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190233    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032573   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: I1229 06:53:54.190259    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032608   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190388    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032631   17440 command_runner.go:130] > Dec 29 06:53:54 functional-695625 kubelet[2634]: E1229 06:53:54.190596    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032655   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.197650    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.032676   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198541    2634 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfq7m" containerName="coredns"
	I1229 06:58:52.032696   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: I1229 06:53:55.198579    2634 scope.go:122] "RemoveContainer" containerID="6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:58:52.032735   17440 command_runner.go:130] > Dec 29 06:53:55 functional-695625 kubelet[2634]: E1229 06:53:55.198854    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7d764666f9-wfq7m_kube-system(00a95e37-1394-45a7-a376-b195e31e3e9c)\"" pod="kube-system/coredns-7d764666f9-wfq7m" podUID="00a95e37-1394-45a7-a376-b195e31e3e9c"
	I1229 06:58:52.032819   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.114313    2634 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-29T06:53:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://28.5.2\\\"}}}\" for node \"functional-695625\": Patch \"https://192.168.39.121:8441/api/v1/nodes/functional-695625/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	I1229 06:58:52.032845   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.650698    2634 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.032864   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.650771    2634 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.032899   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: E1229 06:53:58.651066    2634 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.032919   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 kubelet[2634]: I1229 06:53:58.808551    2634 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:52.032935   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:58:52.032948   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:58:52.032960   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.032981   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 systemd[1]: kubelet.service: Consumed 2.468s CPU time, 33.6M memory peak.
	I1229 06:58:52.032995   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.033012   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045324    6517 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
	I1229 06:58:52.033029   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045661    6517 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:58:52.033042   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045707    6517 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	I1229 06:58:52.033062   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.045732    6517 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I1229 06:58:52.033080   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.046147    6517 server.go:951] "Client rotation is on, will bootstrap in background"
	I1229 06:58:52.033101   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.047668    6517 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	I1229 06:58:52.033120   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.050807    6517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1229 06:58:52.033138   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.062385    6517 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
	I1229 06:58:52.033166   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066238    6517 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
	I1229 06:58:52.033187   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066279    6517 server.go:836] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1229 06:58:52.033206   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066453    6517 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1229 06:58:52.033274   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066538    6517 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"functional-695625","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1229 06:58:52.033294   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066683    6517 topology_manager.go:143] "Creating topology manager with none policy"
	I1229 06:58:52.033309   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066691    6517 container_manager_linux.go:308] "Creating device plugin manager"
	I1229 06:58:52.033326   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066715    6517 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
	I1229 06:58:52.033343   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.066977    6517 state_mem.go:41] "Initialized" logger="CPUManager state memory"
	I1229 06:58:52.033359   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067273    6517 kubelet.go:482] "Attempting to sync node with API server"
	I1229 06:58:52.033378   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067303    6517 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1229 06:58:52.033398   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067321    6517 kubelet.go:394] "Adding apiserver pod source"
	I1229 06:58:52.033413   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.067339    6517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1229 06:58:52.033431   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.071645    6517 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="docker" version="28.5.2" apiVersion="v1"
	I1229 06:58:52.033453   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072532    6517 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	I1229 06:58:52.033476   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.072614    6517 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	I1229 06:58:52.033492   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.079617    6517 server.go:1257] "Started kubelet"
	I1229 06:58:52.033507   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.087576    6517 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
	I1229 06:58:52.033526   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.098777    6517 scope.go:122] "RemoveContainer" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033542   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.106373    6517 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
	I1229 06:58:52.033559   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.109848    6517 server.go:317] "Adding debug handlers to kubelet server"
	I1229 06:58:52.033609   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117444    6517 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1229 06:58:52.033625   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117555    6517 server_v1.go:49] "podresources" method="list" useActivePods=true
	I1229 06:58:52.033642   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.117716    6517 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1229 06:58:52.033665   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.118699    6517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1229 06:58:52.033681   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119167    6517 volume_manager.go:311] "Starting Kubelet Volume Manager"
	I1229 06:58:52.033700   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.119433    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:58:52.033718   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.119972    6517 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1229 06:58:52.033734   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.120370    6517 reconciler.go:29] "Reconciler: start to sync state"
	I1229 06:58:52.033751   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.130418    6517 factory.go:223] Registration of the systemd container factory successfully
	I1229 06:58:52.033776   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.131188    6517 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1229 06:58:52.033808   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.141029    6517 factory.go:223] Registration of the containerd container factory successfully
	I1229 06:58:52.033826   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183036    6517 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
	I1229 06:58:52.033840   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183220    6517 status_manager.go:249] "Starting to sync pod status with apiserver"
	I1229 06:58:52.033855   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.183330    6517 kubelet.go:2501] "Starting kubelet main sync loop"
	I1229 06:58:52.033878   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.183444    6517 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I1229 06:58:52.033905   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.221428    6517 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"functional-695625\" not found"
	I1229 06:58:52.033937   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.242700    6517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd" containerID="fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033974   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.243294    6517 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = Unknown desc = Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:58:52.033993   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269160    6517 cpu_manager.go:225] "Starting" policy="none"
	I1229 06:58:52.034010   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269189    6517 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
	I1229 06:58:52.034030   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269211    6517 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
	I1229 06:58:52.034050   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269368    6517 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
	I1229 06:58:52.034084   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269407    6517 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
	I1229 06:58:52.034099   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269429    6517 policy_none.go:50] "Start"
	I1229 06:58:52.034116   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269440    6517 memory_manager.go:187] "Starting memorymanager" policy="None"
	I1229 06:58:52.034134   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269450    6517 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
	I1229 06:58:52.034152   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.269563    6517 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
	I1229 06:58:52.034167   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.270193    6517 policy_none.go:44] "Start"
	I1229 06:58:52.034186   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.280697    6517 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
	I1229 06:58:52.034203   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282209    6517 eviction_manager.go:194] "Eviction manager: starting control loop"
	I1229 06:58:52.034224   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282399    6517 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1229 06:58:52.034241   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.282694    6517 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
	I1229 06:58:52.034265   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.285700    6517 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I1229 06:58:52.034286   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.286000    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034308   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.290189    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034332   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.296210    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034358   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296213    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8"
	I1229 06:58:52.034380   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296423    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6"
	I1229 06:58:52.034404   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296509    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd"
	I1229 06:58:52.034427   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296522    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd"
	I1229 06:58:52.034450   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296659    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3"
	I1229 06:58:52.034472   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.296736    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7"
	I1229 06:58:52.034499   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.298291    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034521   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.300783    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c"
	I1229 06:58:52.034544   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.307864    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1"
	I1229 06:58:52.034566   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327004    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784"
	I1229 06:58:52.034588   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.327039    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780"
	I1229 06:58:52.034611   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.337430    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd"
	I1229 06:58:52.034633   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338584    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cc8048f6d9ff1df7ba90196f828ce8838881d8a6049d1e2f085d13b40a3a71"
	I1229 06:58:52.034655   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.338603    6517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263"
	I1229 06:58:52.034678   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: E1229 06:54:00.339318    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.034697   17440 command_runner.go:130] > Dec 29 06:54:00 functional-695625 kubelet[6517]: I1229 06:54:00.384315    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.034724   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.121079    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="200ms"
	I1229 06:58:52.034749   17440 command_runner.go:130] > Dec 29 06:54:10 functional-695625 kubelet[6517]: E1229 06:54:10.286789    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034771   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.288099    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034819   17440 command_runner.go:130] > Dec 29 06:54:20 functional-695625 kubelet[6517]: E1229 06:54:20.322920    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	I1229 06:58:52.034843   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.289381    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.034873   17440 command_runner.go:130] > Dec 29 06:54:30 functional-695625 kubelet[6517]: E1229 06:54:30.724518    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	I1229 06:58:52.034936   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.088119    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bc22bb49a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,LastTimestamp:2025-12-29 06:54:00.079586458 +0000 UTC m=+0.095335847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.034963   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: E1229 06:54:34.387607    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.034993   17440 command_runner.go:130] > Dec 29 06:54:34 functional-695625 kubelet[6517]: I1229 06:54:34.589687    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035018   17440 command_runner.go:130] > Dec 29 06:54:40 functional-695625 kubelet[6517]: E1229 06:54:40.289653    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035049   17440 command_runner.go:130] > Dec 29 06:54:41 functional-695625 kubelet[6517]: E1229 06:54:41.525961    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	I1229 06:58:52.035071   17440 command_runner.go:130] > Dec 29 06:54:50 functional-695625 kubelet[6517]: E1229 06:54:50.290623    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035099   17440 command_runner.go:130] > Dec 29 06:54:53 functional-695625 kubelet[6517]: E1229 06:54:53.127043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="3.2s"
	I1229 06:58:52.035126   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.123055    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.035159   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223407    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-ca-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035194   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.223452    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-usr-share-ca-certificates\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035228   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224254    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-flexvolume-dir\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035263   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224286    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-k8s-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035299   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224307    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebc0dd45a3bf1e20d1e524935fd6129c-kubeconfig\") pod \"kube-scheduler-functional-695625\" (UID: \"ebc0dd45a3bf1e20d1e524935fd6129c\") " pod="kube-system/kube-scheduler-functional-695625"
	I1229 06:58:52.035333   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224328    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d173c000af26dcef62569d3a5345fcae-k8s-certs\") pod \"kube-apiserver-functional-695625\" (UID: \"d173c000af26dcef62569d3a5345fcae\") " pod="kube-system/kube-apiserver-functional-695625"
	I1229 06:58:52.035368   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224346    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-ca-certs\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035408   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224360    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-kubeconfig\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035445   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224377    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5079d003096e0cf8214852718da6832c-usr-share-ca-certificates\") pod \"kube-controller-manager-functional-695625\" (UID: \"5079d003096e0cf8214852718da6832c\") " pod="kube-system/kube-controller-manager-functional-695625"
	I1229 06:58:52.035477   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224432    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-certs\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.035512   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: I1229 06:55:00.224449    6517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8da5c6c8980da2ca920a502b6f312384-etcd-data\") pod \"etcd-functional-695625\" (UID: \"8da5c6c8980da2ca920a502b6f312384\") " pod="kube-system/etcd-functional-695625"
	I1229 06:58:52.035534   17440 command_runner.go:130] > Dec 29 06:55:00 functional-695625 kubelet[6517]: E1229 06:55:00.291332    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035563   17440 command_runner.go:130] > Dec 29 06:55:06 functional-695625 kubelet[6517]: E1229 06:55:06.329330    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io functional-695625)" interval="6.4s"
	I1229 06:58:52.035631   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.090561    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.035658   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: E1229 06:55:08.592540    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.035677   17440 command_runner.go:130] > Dec 29 06:55:08 functional-695625 kubelet[6517]: I1229 06:55:08.994308    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035699   17440 command_runner.go:130] > Dec 29 06:55:10 functional-695625 kubelet[6517]: E1229 06:55:10.291711    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035720   17440 command_runner.go:130] > Dec 29 06:55:20 functional-695625 kubelet[6517]: E1229 06:55:20.292793    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035749   17440 command_runner.go:130] > Dec 29 06:55:22 functional-695625 kubelet[6517]: E1229 06:55:22.729733    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.035771   17440 command_runner.go:130] > Dec 29 06:55:30 functional-695625 kubelet[6517]: E1229 06:55:30.293859    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035814   17440 command_runner.go:130] > Dec 29 06:55:39 functional-695625 kubelet[6517]: E1229 06:55:39.730496    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.035838   17440 command_runner.go:130] > Dec 29 06:55:40 functional-695625 kubelet[6517]: E1229 06:55:40.294978    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.035902   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.093022    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.035927   17440 command_runner.go:130] > Dec 29 06:55:42 functional-695625 kubelet[6517]: E1229 06:55:42.996721    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.035947   17440 command_runner.go:130] > Dec 29 06:55:43 functional-695625 kubelet[6517]: I1229 06:55:43.798535    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.035978   17440 command_runner.go:130] > Dec 29 06:55:50 functional-695625 kubelet[6517]: E1229 06:55:50.295990    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036010   17440 command_runner.go:130] > Dec 29 06:55:56 functional-695625 kubelet[6517]: E1229 06:55:56.732252    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.036038   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.228455    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.036061   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: E1229 06:56:00.296294    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036082   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.339811    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.036102   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.590728    6517 scope.go:122] "RemoveContainer" containerID="d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:58:52.036121   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 kubelet[6517]: I1229 06:56:00.596576    6517 scope.go:122] "RemoveContainer" containerID="17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:58:52.036141   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.196928    6517 scope.go:122] "RemoveContainer" containerID="fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:58:52.036165   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199564    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036190   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199638    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036212   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: I1229 06:56:01.199656    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036251   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.199813    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036275   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.211732    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036299   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.212086    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.036323   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226269    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036345   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226760    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036369   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226846    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036393   17440 command_runner.go:130] > Dec 29 06:56:01 functional-695625 kubelet[6517]: E1229 06:56:01.226932    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.036418   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240397    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036441   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.036464   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240759    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036488   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.240798    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.036511   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241099    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036536   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241133    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036561   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241440    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036584   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241482    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036606   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: I1229 06:56:02.241498    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036642   17440 command_runner.go:130] > Dec 29 06:56:02 functional-695625 kubelet[6517]: E1229 06:56:02.241585    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036664   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246390    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036687   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246454    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.036711   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246667    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036734   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246717    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036754   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: I1229 06:56:03.246732    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.036806   17440 command_runner.go:130] > Dec 29 06:56:03 functional-695625 kubelet[6517]: E1229 06:56:03.246832    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.036895   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.297136    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.036922   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342375    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.036945   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342456    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.036973   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: I1229 06:56:10.342477    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037009   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.342670    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037032   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593708    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037052   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.593770    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037076   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598591    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037098   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.598652    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.037122   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606502    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037144   17440 command_runner.go:130] > Dec 29 06:56:10 functional-695625 kubelet[6517]: E1229 06:56:10.606600    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.037168   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302101    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037189   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302675    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037212   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302176    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037235   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302763    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037254   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: I1229 06:56:11.302780    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037278   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302307    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037303   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 kubelet[6517]: E1229 06:56:11.302816    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.037325   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.321043    6517 scope.go:122] "RemoveContainer" containerID="78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac"
	I1229 06:58:52.037348   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.321965    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037372   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322030    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037392   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: I1229 06:56:12.322044    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037424   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.322163    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037449   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323008    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037472   17440 command_runner.go:130] > Dec 29 06:56:12 functional-695625 kubelet[6517]: E1229 06:56:12.323148    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.037497   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037518   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336097    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037539   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: I1229 06:56:13.336114    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037574   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.336243    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037604   17440 command_runner.go:130] > Dec 29 06:56:13 functional-695625 kubelet[6517]: E1229 06:56:13.733654    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.037669   17440 command_runner.go:130] > Dec 29 06:56:16 functional-695625 kubelet[6517]: E1229 06:56:16.095560    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.037694   17440 command_runner.go:130] > Dec 29 06:56:17 functional-695625 kubelet[6517]: E1229 06:56:17.801052    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.037713   17440 command_runner.go:130] > Dec 29 06:56:19 functional-695625 kubelet[6517]: I1229 06:56:19.403026    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.037734   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.297746    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.037760   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342467    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037784   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342554    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037816   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.342589    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037851   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.342829    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037875   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.385984    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.037897   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386062    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.037917   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: I1229 06:56:20.386078    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.037950   17440 command_runner.go:130] > Dec 29 06:56:20 functional-695625 kubelet[6517]: E1229 06:56:20.386220    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.037981   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.298955    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038011   17440 command_runner.go:130] > Dec 29 06:56:30 functional-695625 kubelet[6517]: E1229 06:56:30.734998    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.038035   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185639    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038059   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.185732    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038079   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.185750    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.038102   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493651    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038125   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493733    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038147   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: I1229 06:56:32.493755    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038182   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 kubelet[6517]: E1229 06:56:32.493996    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038203   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.510294    6517 scope.go:122] "RemoveContainer" containerID="18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb"
	I1229 06:58:52.038223   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511464    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038243   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511520    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038260   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: I1229 06:56:33.511535    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038297   17440 command_runner.go:130] > Dec 29 06:56:33 functional-695625 kubelet[6517]: E1229 06:56:33.511684    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038321   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525404    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038344   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525467    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038365   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: I1229 06:56:34.525482    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038401   17440 command_runner.go:130] > Dec 29 06:56:34 functional-695625 kubelet[6517]: E1229 06:56:34.525663    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038423   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.300040    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038449   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342011    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038471   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342082    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038491   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.342099    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038526   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.342223    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038549   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567456    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.038585   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.567665    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.038608   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: I1229 06:56:40.567686    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.038643   17440 command_runner.go:130] > Dec 29 06:56:40 functional-695625 kubelet[6517]: E1229 06:56:40.568152    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.038670   17440 command_runner.go:130] > Dec 29 06:56:47 functional-695625 kubelet[6517]: E1229 06:56:47.736964    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.038735   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.098168    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.27202431 +0000 UTC m=+0.287773690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.038758   17440 command_runner.go:130] > Dec 29 06:56:50 functional-695625 kubelet[6517]: E1229 06:56:50.300747    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038785   17440 command_runner.go:130] > Dec 29 06:56:53 functional-695625 kubelet[6517]: E1229 06:56:53.405155    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.038817   17440 command_runner.go:130] > Dec 29 06:56:56 functional-695625 kubelet[6517]: I1229 06:56:56.606176    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.038842   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.301915    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038869   17440 command_runner.go:130] > Dec 29 06:57:00 functional-695625 kubelet[6517]: E1229 06:57:00.330173    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.038900   17440 command_runner.go:130] > Dec 29 06:57:04 functional-695625 kubelet[6517]: E1229 06:57:04.738681    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.038922   17440 command_runner.go:130] > Dec 29 06:57:10 functional-695625 kubelet[6517]: E1229 06:57:10.302083    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038946   17440 command_runner.go:130] > Dec 29 06:57:20 functional-695625 kubelet[6517]: E1229 06:57:20.302612    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.038977   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185645    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039003   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.185704    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.039034   17440 command_runner.go:130] > Dec 29 06:57:21 functional-695625 kubelet[6517]: E1229 06:57:21.740062    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.039059   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.185952    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039082   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.186017    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039102   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.186034    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.039126   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.873051    6517 scope.go:122] "RemoveContainer" containerID="0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec"
	I1229 06:58:52.039149   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874264    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039171   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874357    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039191   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: I1229 06:57:22.874375    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039227   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 kubelet[6517]: E1229 06:57:22.874499    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039252   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892021    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039275   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892083    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039295   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: I1229 06:57:23.892098    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039330   17440 command_runner.go:130] > Dec 29 06:57:23 functional-695625 kubelet[6517]: E1229 06:57:23.892218    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039396   17440 command_runner.go:130] > Dec 29 06:57:24 functional-695625 kubelet[6517]: E1229 06:57:24.100978    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc794297  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-695625 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252441239 +0000 UTC m=+0.268190608,LastTimestamp:2025-12-29 06:54:00.27223373 +0000 UTC m=+0.287983111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.039419   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.302837    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039444   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.341968    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039468   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.342033    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.039488   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: I1229 06:57:30.342050    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.039523   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.342233    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.039550   17440 command_runner.go:130] > Dec 29 06:57:30 functional-695625 kubelet[6517]: E1229 06:57:30.608375    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.039576   17440 command_runner.go:130] > Dec 29 06:57:32 functional-695625 kubelet[6517]: E1229 06:57:32.186377    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039598   17440 command_runner.go:130] > Dec 29 06:57:32 functional-695625 kubelet[6517]: E1229 06:57:32.186459    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.039675   17440 command_runner.go:130] > Dec 29 06:57:33 functional-695625 kubelet[6517]: E1229 06:57:33.188187    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.039700   17440 command_runner.go:130] > Dec 29 06:57:33 functional-695625 kubelet[6517]: E1229 06:57:33.188267    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-695625" containerName="kube-controller-manager"
	I1229 06:58:52.039715   17440 command_runner.go:130] > Dec 29 06:57:37 functional-695625 kubelet[6517]: I1229 06:57:37.010219    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.039749   17440 command_runner.go:130] > Dec 29 06:57:38 functional-695625 kubelet[6517]: E1229 06:57:38.741770    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.039773   17440 command_runner.go:130] > Dec 29 06:57:40 functional-695625 kubelet[6517]: E1229 06:57:40.303258    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039808   17440 command_runner.go:130] > Dec 29 06:57:50 functional-695625 kubelet[6517]: E1229 06:57:50.304120    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039837   17440 command_runner.go:130] > Dec 29 06:57:55 functional-695625 kubelet[6517]: E1229 06:57:55.743031    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": context deadline exceeded" interval="7s"
	I1229 06:58:52.039903   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 kubelet[6517]: E1229 06:57:58.103052    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc7b9ae7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-695625 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252594919 +0000 UTC m=+0.268344289,LastTimestamp:2025-12-29 06:54:00.272240811 +0000 UTC m=+0.287990191,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.039929   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.304627    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.039954   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.432518    6517 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)"
	I1229 06:58:52.039991   17440 command_runner.go:130] > Dec 29 06:58:00 functional-695625 kubelet[6517]: E1229 06:58:00.432667    6517 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes functional-695625)
	I1229 06:58:52.040014   17440 command_runner.go:130] > Dec 29 06:58:10 functional-695625 kubelet[6517]: E1229 06:58:10.305485    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040037   17440 command_runner.go:130] > Dec 29 06:58:11 functional-695625 kubelet[6517]: E1229 06:58:11.012407    6517 kubelet_node_status.go:106] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="functional-695625"
	I1229 06:58:52.040068   17440 command_runner.go:130] > Dec 29 06:58:12 functional-695625 kubelet[6517]: E1229 06:58:12.743824    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040086   17440 command_runner.go:130] > Dec 29 06:58:18 functional-695625 kubelet[6517]: I1229 06:58:18.014210    6517 kubelet_node_status.go:74] "Attempting to register node" node="functional-695625"
	I1229 06:58:52.040107   17440 command_runner.go:130] > Dec 29 06:58:20 functional-695625 kubelet[6517]: E1229 06:58:20.306630    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040127   17440 command_runner.go:130] > Dec 29 06:58:24 functional-695625 kubelet[6517]: E1229 06:58:24.186554    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040149   17440 command_runner.go:130] > Dec 29 06:58:24 functional-695625 kubelet[6517]: E1229 06:58:24.186719    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-695625" containerName="kube-scheduler"
	I1229 06:58:52.040176   17440 command_runner.go:130] > Dec 29 06:58:29 functional-695625 kubelet[6517]: E1229 06:58:29.745697    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040195   17440 command_runner.go:130] > Dec 29 06:58:30 functional-695625 kubelet[6517]: E1229 06:58:30.307319    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040256   17440 command_runner.go:130] > Dec 29 06:58:32 functional-695625 kubelet[6517]: E1229 06:58:32.105206    6517 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{functional-695625.18859d2bcc791058  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-695625,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-695625 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-695625,},FirstTimestamp:2025-12-29 06:54:00.252428376 +0000 UTC m=+0.268177748,LastTimestamp:2025-12-29 06:54:00.286010652 +0000 UTC m=+0.301760032,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-695625,}"
	I1229 06:58:52.040279   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.184790    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040300   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.184918    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040319   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: I1229 06:58:39.184949    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040354   17440 command_runner.go:130] > Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.185100    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040377   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184709    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040397   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184771    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	I1229 06:58:52.040413   17440 command_runner.go:130] > Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.308010    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	I1229 06:58:52.040433   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.185947    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040455   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.186016    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040477   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.186033    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040498   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503148    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040520   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503225    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040538   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.503241    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040576   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040596   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	I1229 06:58:52.040619   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040640   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040658   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040692   17440 command_runner.go:130] > Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040711   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	I1229 06:58:52.040729   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	I1229 06:58:52.040741   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	I1229 06:58:52.040764   17440 command_runner.go:130] > Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	I1229 06:58:52.040784   17440 command_runner.go:130] > Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	I1229 06:58:52.040807   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I1229 06:58:52.040815   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	I1229 06:58:52.040821   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1229 06:58:52.040830   17440 command_runner.go:130] > Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	I1229 06:58:52.093067   17440 logs.go:123] Gathering logs for dmesg ...
	I1229 06:58:52.093106   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 06:58:52.108863   17440 command_runner.go:130] > [Dec29 06:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	I1229 06:58:52.108898   17440 command_runner.go:130] > [  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	I1229 06:58:52.108912   17440 command_runner.go:130] > [  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1229 06:58:52.108925   17440 command_runner.go:130] > [  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	I1229 06:58:52.108937   17440 command_runner.go:130] > [  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	I1229 06:58:52.108945   17440 command_runner.go:130] > [  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1229 06:58:52.108951   17440 command_runner.go:130] > [  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1229 06:58:52.108957   17440 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I1229 06:58:52.108962   17440 command_runner.go:130] > [  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	I1229 06:58:52.108971   17440 command_runner.go:130] > [  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	I1229 06:58:52.108975   17440 command_runner.go:130] > [  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	I1229 06:58:52.108980   17440 command_runner.go:130] > [  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	I1229 06:58:52.108992   17440 command_runner.go:130] > [  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:52.108997   17440 command_runner.go:130] > [  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	I1229 06:58:52.109006   17440 command_runner.go:130] > [Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	I1229 06:58:52.109011   17440 command_runner.go:130] > [ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	I1229 06:58:52.109021   17440 command_runner.go:130] > [  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:52.109031   17440 command_runner.go:130] > [  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	I1229 06:58:52.109036   17440 command_runner.go:130] > [  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	I1229 06:58:52.109043   17440 command_runner.go:130] > [  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	I1229 06:58:52.109048   17440 command_runner.go:130] > [  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	I1229 06:58:52.109055   17440 command_runner.go:130] > [Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	I1229 06:58:52.109062   17440 command_runner.go:130] > [ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	I1229 06:58:52.109067   17440 command_runner.go:130] > [ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109072   17440 command_runner.go:130] > [Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109080   17440 command_runner.go:130] > [Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109088   17440 command_runner.go:130] > [  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	I1229 06:58:52.109931   17440 logs.go:123] Gathering logs for describe nodes ...
	I1229 06:58:52.109946   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 06:59:52.193646   17440 command_runner.go:130] ! Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1229 06:59:52.193695   17440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.083736259s)
	W1229 06:59:52.193730   17440 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 06:59:52.193743   17440 logs.go:123] Gathering logs for Docker ...
	I1229 06:59:52.193757   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 06:59:52.211424   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.211464   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.211503   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.211519   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.211538   17440 command_runner.go:130] > Dec 29 06:52:21 minikube cri-dockerd[372]: time="2025-12-29T06:52:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1229 06:59:52.211555   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1229 06:59:52.211569   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1229 06:59:52.211587   17440 command_runner.go:130] > Dec 29 06:52:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.211601   17440 command_runner.go:130] > Dec 29 06:52:22 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.211612   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.020462163Z" level=info msg="Starting up"
	I1229 06:59:52.211630   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.027928346Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.211652   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028129610Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.211672   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.028144703Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.211696   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.043277940Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.211714   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.068992169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.211730   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.093451498Z" level=info msg="Loading containers: start."
	I1229 06:59:52.211773   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.245820420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.211790   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.354124488Z" level=info msg="Loading containers: done."
	I1229 06:59:52.211824   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.369556904Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.211841   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.370022229Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.211855   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1229 06:59:52.211871   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.211884   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.429481151Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.211899   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437135480Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.211913   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437228150Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.211926   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437499736Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.211948   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 dockerd[618]: time="2025-12-29T06:52:23.437545942Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.211959   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.211970   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.211984   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.212011   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.212025   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.212039   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Hairpin mode is set to none"
	I1229 06:59:52.212064   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.212079   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.212093   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.212108   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.212125   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.212139   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 cri-dockerd[823]: time="2025-12-29T06:52:23Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.212152   17440 command_runner.go:130] > Dec 29 06:52:23 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212172   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250034276Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:59:52.212192   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250065025Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:59:52.212215   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250432086Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.212237   17440 command_runner.go:130] > Dec 29 06:52:24 functional-695625 dockerd[618]: time="2025-12-29T06:52:24.250448972Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.212252   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.212266   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.004793725Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.212285   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006609373Z" level=warning msg="Error while testing if containerd API is ready" error="Canceled: grpc: the client connection is closing"
	I1229 06:59:52.212301   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[618]: time="2025-12-29T06:52:25.006865498Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.212316   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.212331   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.212341   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.212357   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.072059214Z" level=info msg="Starting up"
	I1229 06:59:52.212372   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079212056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.212392   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079317481Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.212423   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.079333267Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.212444   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.100712562Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.212461   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.111060819Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.212477   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.125644752Z" level=info msg="Loading containers: start."
	I1229 06:59:52.212512   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.269806698Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.212529   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.403684326Z" level=info msg="Loading containers: done."
	I1229 06:59:52.212547   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419740189Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.212562   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.419840379Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.212577   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.440865810Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.212594   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.450796825Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.212612   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451233366Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.212628   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451367379Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.212643   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 dockerd[1144]: time="2025-12-29T06:52:25.451393479Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.212656   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.212671   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.212684   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:59:52.212699   17440 command_runner.go:130] > Dec 29 06:52:25 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212714   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.212732   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.212751   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.212767   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.212783   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:59:52.212808   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.212827   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.212844   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.212864   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.212881   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.212899   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:26Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.212916   17440 command_runner.go:130] > Dec 29 06:52:26 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.212932   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.212949   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.014018901Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.212974   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.015980570Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1229 06:59:52.212995   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1144]: time="2025-12-29T06:52:29.016658114Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.213006   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.213020   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.213033   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.213055   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.080172805Z" level=info msg="Starting up"
	I1229 06:59:52.213073   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087153730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.213094   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087606870Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.213115   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.087791007Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.213135   17440 command_runner.go:130] > Dec 29 06:52:29 functional-695625 dockerd[1647]: time="2025-12-29T06:52:29.102104328Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.213153   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.438808405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.213169   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.712758412Z" level=info msg="Loading containers: start."
	I1229 06:59:52.213204   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.850108278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.213221   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.981771558Z" level=info msg="Loading containers: done."
	I1229 06:59:52.213242   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997281457Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.213258   17440 command_runner.go:130] > Dec 29 06:52:30 functional-695625 dockerd[1647]: time="2025-12-29T06:52:30.997336373Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.213275   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.018270012Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.213291   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.027948102Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.213308   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028167710Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.213321   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028236879Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.213334   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 dockerd[1647]: time="2025-12-29T06:52:31.028260561Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.213348   17440 command_runner.go:130] > Dec 29 06:52:31 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.213387   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213414   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213440   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213465   17440 command_runner.go:130] > Dec 29 06:52:35 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213486   17440 command_runner.go:130] > Dec 29 06:52:44 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1229 06:59:52.213507   17440 command_runner.go:130] > Dec 29 06:52:46 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213528   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213549   17440 command_runner.go:130] > Dec 29 06:52:47 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213573   17440 command_runner.go:130] > Dec 29 06:52:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:52:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.213595   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.687270343Z" level=info msg="ignoring event" container=67027578cf0b79235004d7cd10841e25caaf8524e01d9d37b1cacadb486ee23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213616   17440 command_runner.go:130] > Dec 29 06:52:53 functional-695625 dockerd[1647]: time="2025-12-29T06:52:53.834054505Z" level=info msg="ignoring event" container=82ebbec1e21171232319e14e7521b1318f7a15d9862e1f988ba0a6f37b46d605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213637   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154228197Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.213655   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154272599Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
	I1229 06:59:52.213675   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154382560Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
	I1229 06:59:52.213697   17440 command_runner.go:130] > Dec 29 06:53:24 functional-695625 dockerd[1647]: time="2025-12-29T06:53:24.154394909Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	I1229 06:59:52.213709   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 systemd[1]: Stopping Docker Application Container Engine...
	I1229 06:59:52.213724   17440 command_runner.go:130] > Dec 29 06:53:25 functional-695625 dockerd[1647]: time="2025-12-29T06:53:25.157393741Z" level=info msg="Processing signal 'terminated'"
	I1229 06:59:52.213735   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.159560262Z" level=error msg="Force shutdown daemon"
	I1229 06:59:52.213749   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[1647]: time="2025-12-29T06:53:40.160035445Z" level=info msg="Daemon shutdown complete"
	I1229 06:59:52.213759   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Deactivated successfully.
	I1229 06:59:52.213774   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Stopped Docker Application Container Engine.
	I1229 06:59:52.213786   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: docker.service: Consumed 2.138s CPU time, 29.7M memory peak.
	I1229 06:59:52.213809   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 systemd[1]: Starting Docker Application Container Engine...
	I1229 06:59:52.213822   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.286623538Z" level=info msg="Starting up"
	I1229 06:59:52.213839   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295291170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	I1229 06:59:52.213856   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295480841Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	I1229 06:59:52.213874   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.295496671Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	I1229 06:59:52.213891   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.316635284Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	I1229 06:59:52.213907   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.328807793Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1229 06:59:52.213920   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.355375449Z" level=info msg="Loading containers: start."
	I1229 06:59:52.213942   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.681285713Z" level=info msg="ignoring event" container=5024b03252e39eed8a6ab1319b6386d9a846197175f5c2da843e4c5a390148b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213963   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.755492465Z" level=info msg="ignoring event" container=bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.213985   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.762530714Z" level=info msg="ignoring event" container=64853b50a6c5eae8b7f7796881dd851ed605b45dffe935eb82f288f18c60b24c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214006   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.775670003Z" level=info msg="ignoring event" container=0af491ef7c2f1a8312ee1c51bc20f44ec02abcc65665902a7fb5e969f770e6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214028   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.794654459Z" level=info msg="ignoring event" container=8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214055   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.801655844Z" level=info msg="ignoring event" container=548561c7ada8f895644c9b6b62d6e0a4034da8d3d80b4858670645e21d82b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214078   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828715029Z" level=info msg="ignoring event" container=ad82b94f76293fed55cae621a103b8910667dd22aa9809da79dec1ae4d921263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214099   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.828769092Z" level=info msg="ignoring event" container=a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214122   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.845767487Z" level=info msg="ignoring event" container=abbe46bd960e767cec61bab1a2010c730c247bbaffec2c7d29d32dbef73e8a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214144   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.864343925Z" level=info msg="ignoring event" container=fe7b5da2f7fb57e50d28df32820adefc7c25530e6e48a5b6d53880680dc58dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214166   17440 command_runner.go:130] > Dec 29 06:53:40 functional-695625 dockerd[4014]: time="2025-12-29T06:53:40.865774071Z" level=info msg="ignoring event" container=14aafc386533fecd8b99ec2f19f14752ed432bb1a70922f0cd34af8756fea697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214190   17440 command_runner.go:130] > Dec 29 06:53:45 functional-695625 dockerd[4014]: time="2025-12-29T06:53:45.656598076Z" level=info msg="ignoring event" container=bd7d900efd487bc7b939fa3b0d25d19771212cf2b966bd0006a6316dc04f5159 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214211   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.636734672Z" level=info msg="ignoring event" container=fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.214242   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.811417108Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1229 06:59:52.214258   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.854503584Z" level=info msg="Removing stale sandbox" cid=a123d63a8edb isRestore=false sid=bee98e10184c
	I1229 06:59:52.214283   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.857444846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 155f23c2cd353f99747cbbed5071c374427d34acfe358ab2da9489f0ecc6dd58 20989221f5da3e18159e9875a44d6ffa354887adacc49a282cdee70b58f0dd06], retrying...."
	I1229 06:59:52.214298   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.873316567Z" level=info msg="Removing stale sandbox" cid=0af491ef7c2f isRestore=false sid=043bbf7592a3
	I1229 06:59:52.214323   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.875334227Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 abd499ef79857402bb9465b07e26fb0f75693045ea6a45283c4a1a4b13da7c92], retrying...."
	I1229 06:59:52.214341   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.887452986Z" level=info msg="Removing stale sandbox" cid=ad82b94f7629 isRestore=false sid=4ae81a2c92d8
	I1229 06:59:52.214365   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.890633879Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 ccabc3ec6c0d337909f3a6bfccd1999d5ddec500f785c46c7c1173bb9f142a4d], retrying...."
	I1229 06:59:52.214380   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.896180450Z" level=info msg="Removing stale sandbox" cid=5024b03252e3 isRestore=false sid=4f7be10df8fc
	I1229 06:59:52.214405   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.898438145Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 0e272d35a10e432b889f2a3f6f048225031acf42b0444ba6b0cc9339f3cb374f], retrying...."
	I1229 06:59:52.214421   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.903187461Z" level=info msg="Removing stale sandbox" cid=64853b50a6c5 isRestore=false sid=826a3dc204ef
	I1229 06:59:52.214447   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.905271147Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 67cd3d4378e987242bd34247eace592097542682b6c3f23a5a478422e9bfbb3b], retrying...."
	I1229 06:59:52.214464   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.910152629Z" level=info msg="Removing stale sandbox" cid=548561c7ada8 isRestore=false sid=94281ce70a77
	I1229 06:59:52.214489   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.911967707Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 b513626d489ab85e12802c06e57f2ac0b0298434467c73d2846152ca9481eeae], retrying...."
	I1229 06:59:52.214506   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.917235829Z" level=info msg="Removing stale sandbox" cid=fe7b5da2f7fb isRestore=false sid=b5e6c523a381
	I1229 06:59:52.214531   17440 command_runner.go:130] > Dec 29 06:53:47 functional-695625 dockerd[4014]: time="2025-12-29T06:53:47.919265802Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5e51e23da1f1530469b268437db9bee9625cb9f876916ad030013651a498c4a9 bef3c0f56e910ab0a1a698f2eb08c97229abee2b90bf53ab9119cbdba3cb6eaa], retrying...."
	I1229 06:59:52.214553   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022090385Z" level=warning msg="error locating sandbox id 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3: sandbox 043bbf7592a30562c3a5db5f6adef7320600a25484c541cc4623be026465ffa3 not found"
	I1229 06:59:52.214576   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022147638Z" level=warning msg="error locating sandbox id 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48: sandbox 826a3dc204efcd2a53685e64193c7854d206da1f4b9d3191ff4310e7fa397f48 not found"
	I1229 06:59:52.214600   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022162233Z" level=warning msg="error locating sandbox id 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6: sandbox 4ae81a2c92d8455752f7797b351baf4df03723964818db511d20f34eebee79e6 not found"
	I1229 06:59:52.214623   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022177741Z" level=warning msg="error locating sandbox id 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e: sandbox 94281ce70a77af2abe1d9e184f9e465429cc20d573c966349f11864787414d7e not found"
	I1229 06:59:52.214646   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022193375Z" level=warning msg="error locating sandbox id bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc: sandbox bee98e10184cba7e709f260e6b261c84d9c7e3c73d28f43d4a0e8856c6c40bcc not found"
	I1229 06:59:52.214668   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022209936Z" level=warning msg="error locating sandbox id 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2: sandbox 4f7be10df8fc7c6fb8c1b7e4c4d539333974e2b08fb5c7ae02d96c2a907cd9f2 not found"
	I1229 06:59:52.214690   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022222477Z" level=warning msg="error locating sandbox id b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0: sandbox b5e6c523a3812d48576001e9e106cedbf60f68221656df22876c21c1fa1554d0 not found"
	I1229 06:59:52.214703   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.022440032Z" level=info msg="Loading containers: done."
	I1229 06:59:52.214721   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037242165Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	I1229 06:59:52.214735   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.037335060Z" level=info msg="Initializing buildkit"
	I1229 06:59:52.214748   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.057350643Z" level=info msg="Completed buildkit initialization"
	I1229 06:59:52.214762   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.066932687Z" level=info msg="Daemon has completed initialization"
	I1229 06:59:52.214775   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067089967Z" level=info msg="API listen on /var/run/docker.sock"
	I1229 06:59:52.214788   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067190842Z" level=info msg="API listen on /run/docker.sock"
	I1229 06:59:52.215123   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 dockerd[4014]: time="2025-12-29T06:53:48.067284257Z" level=info msg="API listen on [::]:2376"
	I1229 06:59:52.215148   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Started Docker Application Container Engine.
	I1229 06:59:52.215180   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 cri-dockerd[1510]: time="2025-12-29T06:53:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a123d63a8edb9ae4246a56e508d8c463cc8d08af29fc9cb9b6e0929aba5d6780\""
	I1229 06:59:52.215194   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.215210   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	I1229 06:59:52.215222   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.215233   17440 command_runner.go:130] > Dec 29 06:53:48 functional-695625 systemd[1]: cri-docker.service: Consumed 1.284s CPU time, 18.5M memory peak.
	I1229 06:59:52.215247   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1229 06:59:52.215265   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	I1229 06:59:52.215283   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1229 06:59:52.215299   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start docker client with request timeout 0s"
	I1229 06:59:52.215312   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1229 06:59:52.215324   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Loaded network plugin cni"
	I1229 06:59:52.215340   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1229 06:59:52.215355   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Setting cgroupDriver systemd"
	I1229 06:59:52.215372   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1229 06:59:52.215389   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1229 06:59:52.215401   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Start cri-dockerd grpc backend"
	I1229 06:59:52.215409   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1229 06:59:52.215430   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215454   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215478   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215500   17440 command_runner.go:130] > Dec 29 06:53:49 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215517   17440 command_runner.go:130] > Dec 29 06:53:50 functional-695625 dockerd[4014]: time="2025-12-29T06:53:50.654005689Z" level=info msg="ignoring event" container=fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215532   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215549   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": unexpected command output nsenter: cannot open /proc/5603/ns/net: No such file or directory\n with error: exit status 1"
	I1229 06:59:52.215565   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.781948864Z" level=info msg="ignoring event" container=17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215578   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.786486841Z" level=info msg="ignoring event" container=1fc5fa7d92959587c9b226fbae1d62a43a53ebff128984dc88d95d1d4b914ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215593   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.807329963Z" level=info msg="ignoring event" container=b046056ff071b35753057444705e51c1057b95d46559e1e9b8547d49e18da5a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215606   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.833907949Z" level=info msg="ignoring event" container=6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215622   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.842344727Z" level=info msg="ignoring event" container=a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215643   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.846952655Z" level=info msg="ignoring event" container=4ed27973347711cbc183631c41c12607349bb00d5aed2e705f31e67f8f401bcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215667   17440 command_runner.go:130] > Dec 29 06:53:51 functional-695625 dockerd[4014]: time="2025-12-29T06:53:51.855675748Z" level=info msg="ignoring event" container=98261fa185f6e8d6798b9786902bd8dacc1c3d2b3c629e497537e2dbfc1811e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215688   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 dockerd[4014]: time="2025-12-29T06:53:52.089998903Z" level=info msg="ignoring event" container=a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215712   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215738   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215762   17440 command_runner.go:130] > Dec 29 06:53:52 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215839   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-wfq7m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a014f32abcd0141be679b6631a2dba3ddd9f5d2f50102e9808883af1630d0784\""
	I1229 06:59:52.215868   17440 command_runner.go:130] > Dec 29 06:53:53 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215888   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.150956960Z" level=error msg="collecting stats for container /k8s_etcd_etcd-functional-695625_kube-system_8da5c6c8980da2ca920a502b6f312384_1: invalid id: id is empty"
	I1229 06:59:52.215912   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: 2025/12/29 06:53:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:83)
	I1229 06:59:52.215937   17440 command_runner.go:130] > Dec 29 06:53:58 functional-695625 dockerd[4014]: time="2025-12-29T06:53:58.741840545Z" level=info msg="ignoring event" container=d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.215959   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:53:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1/resolv.conf as [nameserver 192.168.122.1]"
	I1229 06:59:52.215979   17440 command_runner.go:130] > Dec 29 06:53:59 functional-695625 cri-dockerd[4884]: W1229 06:53:59.025412    4884 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	I1229 06:59:52.216007   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216027   17440 command_runner.go:130] > Dec 29 06:54:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:54:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216051   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216067   17440 command_runner.go:130] > Dec 29 06:55:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:55:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216084   17440 command_runner.go:130] > Dec 29 06:56:00 functional-695625 dockerd[4014]: time="2025-12-29T06:56:00.626282205Z" level=info msg="ignoring event" container=78793b793ac7bf212626593654b66a72ee5b6a1a44629c55f4b79db622efccac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216097   17440 command_runner.go:130] > Dec 29 06:56:11 functional-695625 dockerd[4014]: time="2025-12-29T06:56:11.553142622Z" level=info msg="ignoring event" container=18d0015c724a8c309c34f49df00b8349be921326fd871377506d78feeed1dbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216112   17440 command_runner.go:130] > Dec 29 06:56:32 functional-695625 dockerd[4014]: time="2025-12-29T06:56:32.448119389Z" level=info msg="ignoring event" container=0ca8df932c9614c55569a494d042cf1b3ccf68510e98b089818e1f61fe2b0cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216128   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216141   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216157   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216171   17440 command_runner.go:130] > Dec 29 06:56:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:56:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216195   17440 command_runner.go:130] > Dec 29 06:57:22 functional-695625 dockerd[4014]: time="2025-12-29T06:57:22.465508622Z" level=info msg="ignoring event" container=b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216222   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216243   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216263   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216276   17440 command_runner.go:130] > Dec 29 06:57:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:57:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216289   17440 command_runner.go:130] > Dec 29 06:58:43 functional-695625 dockerd[4014]: time="2025-12-29T06:58:43.458641345Z" level=info msg="ignoring event" container=07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216304   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.011072219Z" level=info msg="ignoring event" container=173054afc2f39262ebb1466d26d5d6144bb8704054c087da601130a01d9caaf1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216318   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.102126666Z" level=info msg="ignoring event" container=6b7711ee25a2df71f8c7d296f7186875ebd6ab978a71d33f177de0cc3055645b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216331   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.266578298Z" level=info msg="ignoring event" container=a7b1e961ded554edec9d882d7f1f6093e8446ab1020c81b638de16b76de139b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216346   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.365376654Z" level=info msg="ignoring event" container=fefef7c5591ea14974a99c19d99f86c4404e25de1b446a0cd0f0bcfffa63a991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216365   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.452640794Z" level=info msg="ignoring event" container=4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216380   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.557330204Z" level=info msg="ignoring event" container=d3819cc8ab802e5145e47325398f1da69b88a241482842040339b6b0d609a176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216392   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.666151542Z" level=info msg="ignoring event" container=0a96e34d38f8c1eccbbdf73d99dbbbe353acea505d84b69f0fdd4e54cb811123 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216409   17440 command_runner.go:130] > Dec 29 06:58:50 functional-695625 dockerd[4014]: time="2025-12-29T06:58:50.751481082Z" level=info msg="ignoring event" container=f48fc04e347519b276e239ee9a6b0b8e093862313e46174a1815efae670eec9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1229 06:59:52.216427   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535': Error response from daemon: No such container: 4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535"
	I1229 06:59:52.216440   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535'"
	I1229 06:59:52.216455   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	I1229 06:59:52.216467   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	I1229 06:59:52.216484   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be': Error response from daemon: No such container: bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be"
	I1229 06:59:52.216495   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be'"
	I1229 06:59:52.216512   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e': Error response from daemon: No such container: a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e"
	I1229 06:59:52.216525   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e'"
	I1229 06:59:52.216542   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974': Error response from daemon: No such container: d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	I1229 06:59:52.216554   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974'"
	I1229 06:59:52.216568   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00': Error response from daemon: No such container: 6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	I1229 06:59:52.216582   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	I1229 06:59:52.216596   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	I1229 06:59:52.216611   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	I1229 06:59:52.216628   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	I1229 06:59:52.216642   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	I1229 06:59:52.216660   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	I1229 06:59:52.216673   17440 command_runner.go:130] > Dec 29 06:58:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T06:58:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	I1229 06:59:52.238629   17440 logs.go:123] Gathering logs for container status ...
	I1229 06:59:52.238668   17440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 06:59:52.287732   17440 command_runner.go:130] > CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	W1229 06:59:52.290016   17440 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	W1229 06:59:52.290080   17440 out.go:285] * 
	W1229 06:59:52.290145   17440 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 06:59:52.290156   17440 out.go:285] * 
	W1229 06:59:52.290452   17440 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:59:52.293734   17440 out.go:203] 
	W1229 06:59:52.295449   17440 out.go:285] X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 06:59:52.295482   17440 out.go:285] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1229 06:59:52.295500   17440 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1229 06:59:52.296904   17440 out.go:203] 
	
	
	==> Docker <==
	Dec 29 07:05:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:05:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	Dec 29 07:05:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:05:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	Dec 29 07:05:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:05:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	Dec 29 07:05:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:05:58Z" level=error msg="error getting RW layer size for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00': Error response from daemon: No such container: 6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	Dec 29 07:05:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:05:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	Dec 29 07:05:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:05:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	Dec 29 07:05:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:05:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="error getting RW layer size for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974': Error response from daemon: No such container: d81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd81259f64136cb875391e3242b1e87ce8484d93804fd3fd8f058e794000af974'"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="error getting RW layer size for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be': Error response from daemon: No such container: bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bd96b57aa9fceb297b978973bf1ec18d239034f519208bcbbdb6e3642bd688be'"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="error getting RW layer size for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e': Error response from daemon: No such container: a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a79d99ad3fde3b39ff452b10ae85c19ada97b63b0d02bd1df136d6abdc0aab3e'"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="error getting RW layer size for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535': Error response from daemon: No such container: 4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d49952084c921663c4ca3a5954c1e5f3579ae4ede51cd2af5f26d39cffeb535'"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="error getting RW layer size for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b': Error response from daemon: No such container: fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb6db97d8ffe47f841dad5663bec255840cbd95c984cdcea62e4a40ce9aadf6b'"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="error getting RW layer size for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d': Error response from daemon: No such container: 8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8911777281f410454c05e6fe7890cd18afd703aba8c259833fbd1b9504e6954d'"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="error getting RW layer size for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13': Error response from daemon: No such container: 17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '17fe16a2822a8e49aab04292eeabfe463223d6c2df3f3c9cb22a3638b3ceab13'"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="error getting RW layer size for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00': Error response from daemon: No such container: 6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6f69ba6a1553a587ecf566f8e32713045c125b882d7d42b21f53e313e21aed00'"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="error getting RW layer size for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd': Error response from daemon: No such container: fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd"
	Dec 29 07:06:58 functional-695625 cri-dockerd[4884]: time="2025-12-29T07:06:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fd22eb0d6c14aa574181a25c39c62a49aca8e387257f4656bcd9f72653cd22fd'"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> kernel <==
	 07:08:55 up 16 min,  0 users,  load average: 0.01, 0.08, 0.11
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:39 functional-695625 kubelet[6517]: E1229 06:58:39.185100    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184709    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.184771    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-695625" containerName="etcd"
	Dec 29 06:58:40 functional-695625 kubelet[6517]: E1229 06:58:40.308010    6517 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-695625\" not found"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.185947    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.186016    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.186033    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503148    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503225    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: I1229 06:58:43.503241    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.798881691s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (153.38s)
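The failure above reduces to the same kubeadm preflight error seen throughout these logs, "[ERROR Port-8441]: Port 8441 is in use": something inside the guest is still bound to the apiserver port when kubeadm init is retried. A minimal diagnostic sketch, assuming ss/lsof are available in the guest image and reusing the profile name and port from the logs (these commands are illustrative and not part of the captured run; note that lsof selects a port with -i, whereas -p expects a PID):

	# List listeners inside the VM and look for whatever still owns 8441.
	out/minikube-linux-amd64 -p functional-695625 ssh -- sudo ss -ltnp
	# Equivalent check with lsof, selecting by port rather than by PID.
	out/minikube-linux-amd64 -p functional-695625 ssh -- sudo lsof -i :8441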

                                                
                                    
TestFunctional/serial/ExtraConfig (158.49s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695625 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1229 07:10:06.155207   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-695625 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 81 (1m6.334784733s)

                                                
                                                
-- stdout --
	* [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	* Related issue: https://github.com/kubernetes/minikube/issues/5484

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-695625 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 81
functional_test.go:776: restart took 1m6.335281095s for "functional-695625" cluster.
I1229 07:10:18.108226   13486 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
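The exit status 81 above is reported together with minikube's GUEST_PORT_IN_USE error, i.e. the same port-8441 conflict that failed the earlier serial tests. A hedged recovery sketch for a local reproduction, reusing the profile name and flags from the failing invocation (not part of the captured run):

	# Stop the guest so the stale process bound to 8441 is torn down, then retry the same start.
	out/minikube-linux-amd64 stop -p functional-695625
	out/minikube-linux-amd64 start -p functional-695625 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all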
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.835700864s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m0.555658354s)
helpers_test.go:261: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                       │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                       │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:52 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                       │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                          │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                          │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                          │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ delete  │ -p nospam-039815                                                                                         │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ start   │ -p functional-695625 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:53 UTC │
	│ start   │ -p functional-695625 --alsologtostderr -v=8                                                              │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:53 UTC │                     │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:3.1                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:03 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:3.3                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:latest                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add minikube-local-cache-test:functional-695625                                  │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache delete minikube-local-cache-test:functional-695625                               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl images                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo docker rmi registry.k8s.io/pause:latest                                       │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │                     │
	│ cache   │ functional-695625 cache reload                                                                           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ kubectl │ functional-695625 kubectl -- --context functional-695625 get pods                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │                     │
	│ start   │ -p functional-695625 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:09:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:09:11.823825   21144 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:09:11.824087   21144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:11.824091   21144 out.go:374] Setting ErrFile to fd 2...
	I1229 07:09:11.824094   21144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:11.824292   21144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:09:11.824739   21144 out.go:368] Setting JSON to false
	I1229 07:09:11.825573   21144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3102,"bootTime":1766989050,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:09:11.825626   21144 start.go:143] virtualization: kvm guest
	I1229 07:09:11.828181   21144 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:09:11.830065   21144 notify.go:221] Checking for updates...
	I1229 07:09:11.830099   21144 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:09:11.832513   21144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:09:11.834171   21144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:09:11.835714   21144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:09:11.837182   21144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:09:11.838613   21144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:09:11.840293   21144 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:09:11.840375   21144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:09:11.872577   21144 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 07:09:11.874034   21144 start.go:309] selected driver: kvm2
	I1229 07:09:11.874043   21144 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:11.874148   21144 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:09:11.875008   21144 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:09:11.875031   21144 cni.go:84] Creating CNI manager for ""
	I1229 07:09:11.875088   21144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:09:11.875135   21144 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:11.875236   21144 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:09:11.877238   21144 out.go:179] * Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	I1229 07:09:11.878662   21144 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:09:11.878689   21144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 07:09:11.878696   21144 cache.go:65] Caching tarball of preloaded images
	I1229 07:09:11.878855   21144 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 07:09:11.878865   21144 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:09:11.878973   21144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/config.json ...
	I1229 07:09:11.879179   21144 start.go:360] acquireMachinesLock for functional-695625: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 07:09:11.879222   21144 start.go:364] duration metric: took 30.478µs to acquireMachinesLock for "functional-695625"
	I1229 07:09:11.879237   21144 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:09:11.879242   21144 fix.go:54] fixHost starting: 
	I1229 07:09:11.881414   21144 fix.go:112] recreateIfNeeded on functional-695625: state=Running err=<nil>
	W1229 07:09:11.881433   21144 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:09:11.883307   21144 out.go:252] * Updating the running kvm2 "functional-695625" VM ...
	I1229 07:09:11.883328   21144 machine.go:94] provisionDockerMachine start ...
	I1229 07:09:11.886670   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.887242   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:11.887262   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.887496   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:11.887732   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:11.887736   21144 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:09:11.991696   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 07:09:11.991727   21144 buildroot.go:166] provisioning hostname "functional-695625"
	I1229 07:09:11.994978   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.995530   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:11.995549   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.995737   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:11.995938   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:11.995945   21144 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-695625 && echo "functional-695625" | sudo tee /etc/hostname
	I1229 07:09:12.119417   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 07:09:12.122745   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.123272   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.123300   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.123538   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.123821   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.123838   21144 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-695625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-695625/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-695625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:09:12.232450   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:09:12.232465   21144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 07:09:12.232498   21144 buildroot.go:174] setting up certificates
	I1229 07:09:12.232516   21144 provision.go:84] configureAuth start
	I1229 07:09:12.235023   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.235391   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.235407   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.237672   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.238025   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.238038   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.238176   21144 provision.go:143] copyHostCerts
	I1229 07:09:12.238217   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 07:09:12.238229   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:09:12.238296   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 07:09:12.238403   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 07:09:12.238407   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:09:12.238432   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 07:09:12.238491   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 07:09:12.238499   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:09:12.238520   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 07:09:12.238615   21144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.functional-695625 san=[127.0.0.1 192.168.39.121 functional-695625 localhost minikube]
	I1229 07:09:12.295310   21144 provision.go:177] copyRemoteCerts
	I1229 07:09:12.295367   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:09:12.298377   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.298846   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.298865   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.299023   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:12.383429   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:09:12.418297   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:09:12.449932   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:09:12.480676   21144 provision.go:87] duration metric: took 248.13234ms to configureAuth
	I1229 07:09:12.480699   21144 buildroot.go:189] setting minikube options for container-runtime
	I1229 07:09:12.480912   21144 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:09:12.483638   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.484264   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.484283   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.484490   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.484748   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.484754   21144 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:09:12.588448   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 07:09:12.588460   21144 buildroot.go:70] root file system type: tmpfs
	I1229 07:09:12.588547   21144 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:09:12.591297   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.591753   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.591783   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.591962   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.592154   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.592188   21144 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:09:12.712416   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:09:12.715731   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.716163   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.716179   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.716376   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.716633   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.716644   21144 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:09:12.827453   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:09:12.827470   21144 machine.go:97] duration metric: took 944.134387ms to provisionDockerMachine
	I1229 07:09:12.827483   21144 start.go:293] postStartSetup for "functional-695625" (driver="kvm2")
	I1229 07:09:12.827495   21144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:09:12.827561   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:09:12.831103   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.831472   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.831495   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.831644   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:12.914033   21144 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:09:12.918908   21144 info.go:137] Remote host: Buildroot 2025.02
	I1229 07:09:12.918929   21144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 07:09:12.919006   21144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 07:09:12.919110   21144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 07:09:12.919214   21144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> hosts in /etc/test/nested/copy/13486
	I1229 07:09:12.919253   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13486
	I1229 07:09:12.931251   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:09:12.961219   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts --> /etc/test/nested/copy/13486/hosts (40 bytes)
	I1229 07:09:12.994141   21144 start.go:296] duration metric: took 166.645883ms for postStartSetup
	I1229 07:09:12.994171   21144 fix.go:56] duration metric: took 1.114929026s for fixHost
	I1229 07:09:12.997310   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.997695   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.997713   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.997933   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.998123   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.998127   21144 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 07:09:13.101274   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766992153.087186683
	
	I1229 07:09:13.101290   21144 fix.go:216] guest clock: 1766992153.087186683
	I1229 07:09:13.101310   21144 fix.go:229] Guest: 2025-12-29 07:09:13.087186683 +0000 UTC Remote: 2025-12-29 07:09:12.994173684 +0000 UTC m=+1.216768593 (delta=93.012999ms)
	I1229 07:09:13.101325   21144 fix.go:200] guest clock delta is within tolerance: 93.012999ms
	I1229 07:09:13.101328   21144 start.go:83] releasing machines lock for "functional-695625", held for 1.222099797s
	I1229 07:09:13.104421   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.104778   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.104809   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.105311   21144 ssh_runner.go:195] Run: cat /version.json
	I1229 07:09:13.105384   21144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:09:13.108188   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.108465   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.108487   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.108626   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:13.108649   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.109272   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.109293   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.109456   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:13.188367   21144 ssh_runner.go:195] Run: systemctl --version
	I1229 07:09:13.214864   21144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:09:13.221871   21144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:09:13.221939   21144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:09:13.234387   21144 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:09:13.234409   21144 start.go:496] detecting cgroup driver to use...
	I1229 07:09:13.234439   21144 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:09:13.234555   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:09:13.265557   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:09:13.279647   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:09:13.292829   21144 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:09:13.292880   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:09:13.305636   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:09:13.318870   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:09:13.332057   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:09:13.345233   21144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:09:13.358882   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:09:13.371537   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:09:13.384369   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:09:13.398107   21144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:09:13.409570   21144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:09:13.422369   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:13.586635   21144 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 07:09:13.630254   21144 start.go:496] detecting cgroup driver to use...
	I1229 07:09:13.630285   21144 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:09:13.630342   21144 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:09:13.649562   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:09:13.669222   21144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:09:13.690312   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:09:13.709458   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:09:13.726376   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:09:13.751705   21144 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:09:13.756140   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:09:13.768404   21144 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:09:13.789872   21144 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:09:13.962749   21144 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:09:14.121274   21144 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:09:14.121382   21144 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 07:09:14.144014   21144 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:09:14.159574   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:14.318011   21144 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 07:09:14.810377   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:09:14.828678   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:09:14.845136   21144 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 07:09:14.867262   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:09:14.884057   21144 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:09:15.045033   21144 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:09:15.204271   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:15.357839   21144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:09:15.393570   21144 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:09:15.410289   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:15.569395   21144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:09:15.702195   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:09:15.721913   21144 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:09:15.721983   21144 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:09:15.728173   21144 start.go:574] Will wait 60s for crictl version
	I1229 07:09:15.728240   21144 ssh_runner.go:195] Run: which crictl
	I1229 07:09:15.732532   21144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 07:09:15.768758   21144 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 07:09:15.768832   21144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:09:15.798391   21144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:09:15.827196   21144 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 07:09:15.830472   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:15.830929   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:15.830951   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:15.831160   21144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 07:09:15.838098   21144 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1229 07:09:15.839808   21144 kubeadm.go:884] updating cluster {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:09:15.839935   21144 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:09:15.840017   21144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:09:15.861298   21144 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-695625
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1229 07:09:15.861312   21144 docker.go:624] Images already preloaded, skipping extraction
	I1229 07:09:15.861369   21144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:09:15.881522   21144 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-695625
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1229 07:09:15.881540   21144 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:09:15.881547   21144 kubeadm.go:935] updating node { 192.168.39.121 8441 v1.35.0 docker true true} ...
	I1229 07:09:15.881633   21144 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-695625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:09:15.881681   21144 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 07:09:15.935676   21144 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1229 07:09:15.935701   21144 cni.go:84] Creating CNI manager for ""
	I1229 07:09:15.935727   21144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:09:15.935738   21144 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:09:15.935764   21144 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-695625 NodeName:functional-695625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:09:15.935924   21144 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-695625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:09:15.935984   21144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:09:15.948561   21144 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:09:15.948636   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:09:15.961301   21144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1229 07:09:15.983422   21144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:09:16.005682   21144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I1229 07:09:16.029474   21144 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I1229 07:09:16.034228   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:16.201925   21144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:09:16.221870   21144 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625 for IP: 192.168.39.121
	I1229 07:09:16.221886   21144 certs.go:195] generating shared ca certs ...
	I1229 07:09:16.221906   21144 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:09:16.222138   21144 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 07:09:16.222204   21144 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 07:09:16.222214   21144 certs.go:257] generating profile certs ...
	I1229 07:09:16.222330   21144 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key
	I1229 07:09:16.222384   21144 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key.a4651613
	I1229 07:09:16.222444   21144 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key
	I1229 07:09:16.222593   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 07:09:16.222640   21144 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 07:09:16.222649   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:09:16.222683   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:09:16.222732   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:09:16.222762   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 07:09:16.222857   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:09:16.223814   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:09:16.259745   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:09:16.289889   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:09:16.326260   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:09:16.358438   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:09:16.390832   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:09:16.422104   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:09:16.453590   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:09:16.484628   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:09:16.515097   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 07:09:16.545423   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 07:09:16.576428   21144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:09:16.598198   21144 ssh_runner.go:195] Run: openssl version
	I1229 07:09:16.604919   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.616843   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 07:09:16.628930   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.634304   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.634358   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.642266   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:09:16.654506   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.666895   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 07:09:16.678959   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.684549   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.684610   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.692570   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:09:16.704782   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.717059   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:09:16.728888   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.734067   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.734122   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.741254   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:09:16.753067   21144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:09:16.758242   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:09:16.765682   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:09:16.773077   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:09:16.780312   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:09:16.787576   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:09:16.794989   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:09:16.802975   21144 kubeadm.go:401] StartCluster: {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:16.803131   21144 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:09:16.822479   21144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:09:16.835464   21144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:09:16.847946   21144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:09:16.859599   21144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:09:16.859610   21144 kubeadm.go:158] found existing configuration files:
	
	I1229 07:09:16.859660   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 07:09:16.871193   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:09:16.871261   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:09:16.883523   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 07:09:16.896218   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:09:16.896282   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:09:16.911191   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 07:09:16.924861   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:09:16.924909   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:09:16.944303   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 07:09:16.962588   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:09:16.962645   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:09:16.977278   21144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 07:09:17.182115   21144 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:09:17.182201   21144 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 07:09:17.182249   21144 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 07:09:17.182388   21144 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 07:09:17.182445   21144 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 07:09:17.182524   21144 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:09:17.184031   21144 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:09:17.184089   21144 kubeadm.go:319] [preflight] Running pre-flight checks
	W1229 07:09:17.184195   21144 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:09:17.184268   21144 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 07:09:17.243543   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:09:17.260150   21144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:09:17.273154   21144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:09:17.273164   21144 kubeadm.go:158] found existing configuration files:
	
	I1229 07:09:17.273225   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 07:09:17.284873   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:09:17.284932   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:09:17.296898   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 07:09:17.307707   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:09:17.307770   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:09:17.320033   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 07:09:17.331276   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:09:17.331337   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:09:17.342966   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 07:09:17.354640   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:09:17.354687   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:09:17.366632   21144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 07:09:17.552872   21144 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:09:17.552925   21144 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 07:09:17.552984   21144 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 07:09:17.553138   21144 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 07:09:17.553225   21144 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 07:09:17.553323   21144 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:09:17.554897   21144 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:09:17.554936   21144 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:09:17.555003   21144 kubeadm.go:403] duration metric: took 752.035112ms to StartCluster
	I1229 07:09:17.555040   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:09:17.555086   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:09:17.587966   21144 cri.go:96] found id: ""
	I1229 07:09:17.587989   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.587998   21144 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:09:17.588005   21144 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:09:17.588086   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:09:17.620754   21144 cri.go:96] found id: ""
	I1229 07:09:17.620772   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.620788   21144 logs.go:284] No container was found matching "etcd"
	I1229 07:09:17.620811   21144 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:09:17.620876   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:09:17.652137   21144 cri.go:96] found id: ""
	I1229 07:09:17.652158   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.652168   21144 logs.go:284] No container was found matching "coredns"
	I1229 07:09:17.652174   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:09:17.652227   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:09:17.684490   21144 cri.go:96] found id: ""
	I1229 07:09:17.684506   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.684514   21144 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:09:17.684520   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:09:17.684583   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:09:17.716008   21144 cri.go:96] found id: ""
	I1229 07:09:17.716024   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.716031   21144 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:09:17.716036   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:09:17.716108   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:09:17.749478   21144 cri.go:96] found id: ""
	I1229 07:09:17.749496   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.749504   21144 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:09:17.749511   21144 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:09:17.749573   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:09:17.781397   21144 cri.go:96] found id: ""
	I1229 07:09:17.781414   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.781421   21144 logs.go:284] No container was found matching "kindnet"
	I1229 07:09:17.781425   21144 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:09:17.781474   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:09:17.813076   21144 cri.go:96] found id: ""
	I1229 07:09:17.813093   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.813116   21144 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:09:17.813127   21144 logs.go:123] Gathering logs for container status ...
	I1229 07:09:17.813139   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:09:17.855147   21144 logs.go:123] Gathering logs for kubelet ...
	I1229 07:09:17.855165   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:09:17.931133   21144 logs.go:123] Gathering logs for dmesg ...
	I1229 07:09:17.931160   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:09:17.948718   21144 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:09:17.948738   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 07:10:18.030239   21144 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.08145445s)
	W1229 07:10:18.030299   21144 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 07:10:18.030308   21144 logs.go:123] Gathering logs for Docker ...
	I1229 07:10:18.030323   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1229 07:10:18.094027   21144 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:10:18.094084   21144 out.go:285] * 
	W1229 07:10:18.094156   21144 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:10:18.094163   21144 out.go:285] * 
	W1229 07:10:18.094381   21144 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:10:18.097345   21144 out.go:203] 
	W1229 07:10:18.098865   21144 out.go:285] X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:10:18.098945   21144 out.go:285] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1229 07:10:18.098968   21144 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1229 07:10:18.100537   21144 out.go:203] 
	
	
	==> Docker <==
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.752007897Z" level=info msg="Loading containers: done."
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767040399Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767118717Z" level=info msg="Initializing buildkit"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.788494375Z" level=info msg="Completed buildkit initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794133927Z" level=info msg="Daemon has completed initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794208259Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794363772Z" level=info msg="API listen on /run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794367632Z" level=info msg="API listen on [::]:2376"
	Dec 29 07:09:14 functional-695625 systemd[1]: Started Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Consumed 4.681s CPU time, 18.9M memory peak.
	Dec 29 07:09:15 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Loaded network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Setting cgroupDriver systemd"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:09:15 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 07:09] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> kernel <==
	 07:11:34 up 19 min,  0 users,  load average: 0.02, 0.07, 0.09
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12046]: E1229 07:09:16.288680   12046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12175]: E1229 07:09:16.952765   12175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:17 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.735333267s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (158.49s)

TestFunctional/serial/ComponentHealth (152.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-695625 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-695625 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (1m0.060318885s)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-695625 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.792883594s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
E1229 07:13:43.090938   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m0.564298443s)
helpers_test.go:261: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                       │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:51 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                       │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:51 UTC │ 29 Dec 25 06:52 UTC │
	│ unpause │ nospam-039815 --log_dir /tmp/nospam-039815 unpause                                                       │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                          │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                          │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ stop    │ nospam-039815 --log_dir /tmp/nospam-039815 stop                                                          │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ delete  │ -p nospam-039815                                                                                         │ nospam-039815     │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:52 UTC │
	│ start   │ -p functional-695625 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:52 UTC │ 29 Dec 25 06:53 UTC │
	│ start   │ -p functional-695625 --alsologtostderr -v=8                                                              │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 06:53 UTC │                     │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:3.1                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:03 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:3.3                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add registry.k8s.io/pause:latest                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache add minikube-local-cache-test:functional-695625                                  │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ functional-695625 cache delete minikube-local-cache-test:functional-695625                               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl images                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo docker rmi registry.k8s.io/pause:latest                                       │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │                     │
	│ cache   │ functional-695625 cache reload                                                                           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ ssh     │ functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │ 29 Dec 25 07:04 UTC │
	│ kubectl │ functional-695625 kubectl -- --context functional-695625 get pods                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:04 UTC │                     │
	│ start   │ -p functional-695625 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:09:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:09:11.823825   21144 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:09:11.824087   21144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:11.824091   21144 out.go:374] Setting ErrFile to fd 2...
	I1229 07:09:11.824094   21144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:11.824292   21144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:09:11.824739   21144 out.go:368] Setting JSON to false
	I1229 07:09:11.825573   21144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3102,"bootTime":1766989050,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:09:11.825626   21144 start.go:143] virtualization: kvm guest
	I1229 07:09:11.828181   21144 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:09:11.830065   21144 notify.go:221] Checking for updates...
	I1229 07:09:11.830099   21144 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:09:11.832513   21144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:09:11.834171   21144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:09:11.835714   21144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:09:11.837182   21144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:09:11.838613   21144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:09:11.840293   21144 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:09:11.840375   21144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:09:11.872577   21144 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 07:09:11.874034   21144 start.go:309] selected driver: kvm2
	I1229 07:09:11.874043   21144 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:11.874148   21144 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:09:11.875008   21144 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:09:11.875031   21144 cni.go:84] Creating CNI manager for ""
	I1229 07:09:11.875088   21144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:09:11.875135   21144 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:11.875236   21144 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:09:11.877238   21144 out.go:179] * Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	I1229 07:09:11.878662   21144 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:09:11.878689   21144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 07:09:11.878696   21144 cache.go:65] Caching tarball of preloaded images
	I1229 07:09:11.878855   21144 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 07:09:11.878865   21144 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:09:11.878973   21144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/config.json ...
	I1229 07:09:11.879179   21144 start.go:360] acquireMachinesLock for functional-695625: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 07:09:11.879222   21144 start.go:364] duration metric: took 30.478µs to acquireMachinesLock for "functional-695625"
	I1229 07:09:11.879237   21144 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:09:11.879242   21144 fix.go:54] fixHost starting: 
	I1229 07:09:11.881414   21144 fix.go:112] recreateIfNeeded on functional-695625: state=Running err=<nil>
	W1229 07:09:11.881433   21144 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:09:11.883307   21144 out.go:252] * Updating the running kvm2 "functional-695625" VM ...
	I1229 07:09:11.883328   21144 machine.go:94] provisionDockerMachine start ...
	I1229 07:09:11.886670   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.887242   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:11.887262   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.887496   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:11.887732   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:11.887736   21144 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:09:11.991696   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 07:09:11.991727   21144 buildroot.go:166] provisioning hostname "functional-695625"
	I1229 07:09:11.994978   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.995530   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:11.995549   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.995737   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:11.995938   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:11.995945   21144 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-695625 && echo "functional-695625" | sudo tee /etc/hostname
	I1229 07:09:12.119417   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 07:09:12.122745   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.123272   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.123300   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.123538   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.123821   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.123838   21144 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-695625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-695625/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-695625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:09:12.232450   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:09:12.232465   21144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 07:09:12.232498   21144 buildroot.go:174] setting up certificates
	I1229 07:09:12.232516   21144 provision.go:84] configureAuth start
	I1229 07:09:12.235023   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.235391   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.235407   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.237672   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.238025   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.238038   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.238176   21144 provision.go:143] copyHostCerts
	I1229 07:09:12.238217   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 07:09:12.238229   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:09:12.238296   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 07:09:12.238403   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 07:09:12.238407   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:09:12.238432   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 07:09:12.238491   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 07:09:12.238499   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:09:12.238520   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 07:09:12.238615   21144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.functional-695625 san=[127.0.0.1 192.168.39.121 functional-695625 localhost minikube]
	I1229 07:09:12.295310   21144 provision.go:177] copyRemoteCerts
	I1229 07:09:12.295367   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:09:12.298377   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.298846   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.298865   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.299023   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:12.383429   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:09:12.418297   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:09:12.449932   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:09:12.480676   21144 provision.go:87] duration metric: took 248.13234ms to configureAuth
	I1229 07:09:12.480699   21144 buildroot.go:189] setting minikube options for container-runtime
	I1229 07:09:12.480912   21144 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:09:12.483638   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.484264   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.484283   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.484490   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.484748   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.484754   21144 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:09:12.588448   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 07:09:12.588460   21144 buildroot.go:70] root file system type: tmpfs
	I1229 07:09:12.588547   21144 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:09:12.591297   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.591753   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.591783   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.591962   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.592154   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.592188   21144 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:09:12.712416   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:09:12.715731   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.716163   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.716179   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.716376   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.716633   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.716644   21144 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:09:12.827453   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:09:12.827470   21144 machine.go:97] duration metric: took 944.134387ms to provisionDockerMachine
	I1229 07:09:12.827483   21144 start.go:293] postStartSetup for "functional-695625" (driver="kvm2")
	I1229 07:09:12.827495   21144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:09:12.827561   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:09:12.831103   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.831472   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.831495   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.831644   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:12.914033   21144 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:09:12.918908   21144 info.go:137] Remote host: Buildroot 2025.02
	I1229 07:09:12.918929   21144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 07:09:12.919006   21144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 07:09:12.919110   21144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 07:09:12.919214   21144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> hosts in /etc/test/nested/copy/13486
	I1229 07:09:12.919253   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13486
	I1229 07:09:12.931251   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:09:12.961219   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts --> /etc/test/nested/copy/13486/hosts (40 bytes)
	I1229 07:09:12.994141   21144 start.go:296] duration metric: took 166.645883ms for postStartSetup
	I1229 07:09:12.994171   21144 fix.go:56] duration metric: took 1.114929026s for fixHost
	I1229 07:09:12.997310   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.997695   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.997713   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.997933   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.998123   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.998127   21144 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 07:09:13.101274   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766992153.087186683
	
	I1229 07:09:13.101290   21144 fix.go:216] guest clock: 1766992153.087186683
	I1229 07:09:13.101310   21144 fix.go:229] Guest: 2025-12-29 07:09:13.087186683 +0000 UTC Remote: 2025-12-29 07:09:12.994173684 +0000 UTC m=+1.216768593 (delta=93.012999ms)
	I1229 07:09:13.101325   21144 fix.go:200] guest clock delta is within tolerance: 93.012999ms
	I1229 07:09:13.101328   21144 start.go:83] releasing machines lock for "functional-695625", held for 1.222099797s
	I1229 07:09:13.104421   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.104778   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.104809   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.105311   21144 ssh_runner.go:195] Run: cat /version.json
	I1229 07:09:13.105384   21144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:09:13.108188   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.108465   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.108487   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.108626   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:13.108649   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.109272   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.109293   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.109456   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:13.188367   21144 ssh_runner.go:195] Run: systemctl --version
	I1229 07:09:13.214864   21144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:09:13.221871   21144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:09:13.221939   21144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:09:13.234387   21144 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:09:13.234409   21144 start.go:496] detecting cgroup driver to use...
	I1229 07:09:13.234439   21144 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:09:13.234555   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:09:13.265557   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:09:13.279647   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:09:13.292829   21144 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:09:13.292880   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:09:13.305636   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:09:13.318870   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:09:13.332057   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:09:13.345233   21144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:09:13.358882   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:09:13.371537   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:09:13.384369   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:09:13.398107   21144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:09:13.409570   21144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:09:13.422369   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:13.586635   21144 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 07:09:13.630254   21144 start.go:496] detecting cgroup driver to use...
	I1229 07:09:13.630285   21144 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:09:13.630342   21144 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:09:13.649562   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:09:13.669222   21144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:09:13.690312   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:09:13.709458   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:09:13.726376   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:09:13.751705   21144 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:09:13.756140   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:09:13.768404   21144 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:09:13.789872   21144 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:09:13.962749   21144 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:09:14.121274   21144 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:09:14.121382   21144 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 07:09:14.144014   21144 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:09:14.159574   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:14.318011   21144 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 07:09:14.810377   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:09:14.828678   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:09:14.845136   21144 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 07:09:14.867262   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:09:14.884057   21144 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:09:15.045033   21144 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:09:15.204271   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:15.357839   21144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:09:15.393570   21144 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:09:15.410289   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:15.569395   21144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:09:15.702195   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:09:15.721913   21144 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:09:15.721983   21144 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:09:15.728173   21144 start.go:574] Will wait 60s for crictl version
	I1229 07:09:15.728240   21144 ssh_runner.go:195] Run: which crictl
	I1229 07:09:15.732532   21144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 07:09:15.768758   21144 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 07:09:15.768832   21144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:09:15.798391   21144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:09:15.827196   21144 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 07:09:15.830472   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:15.830929   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:15.830951   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:15.831160   21144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 07:09:15.838098   21144 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1229 07:09:15.839808   21144 kubeadm.go:884] updating cluster {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:09:15.839935   21144 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:09:15.840017   21144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:09:15.861298   21144 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-695625
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1229 07:09:15.861312   21144 docker.go:624] Images already preloaded, skipping extraction
	I1229 07:09:15.861369   21144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:09:15.881522   21144 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-695625
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1229 07:09:15.881540   21144 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:09:15.881547   21144 kubeadm.go:935] updating node { 192.168.39.121 8441 v1.35.0 docker true true} ...
	I1229 07:09:15.881633   21144 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-695625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:09:15.881681   21144 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 07:09:15.935676   21144 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1229 07:09:15.935701   21144 cni.go:84] Creating CNI manager for ""
	I1229 07:09:15.935727   21144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:09:15.935738   21144 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:09:15.935764   21144 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-695625 NodeName:functional-695625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:09:15.935924   21144 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-695625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:09:15.935984   21144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:09:15.948561   21144 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:09:15.948636   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:09:15.961301   21144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1229 07:09:15.983422   21144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:09:16.005682   21144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I1229 07:09:16.029474   21144 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I1229 07:09:16.034228   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:16.201925   21144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:09:16.221870   21144 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625 for IP: 192.168.39.121
	I1229 07:09:16.221886   21144 certs.go:195] generating shared ca certs ...
	I1229 07:09:16.221906   21144 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:09:16.222138   21144 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 07:09:16.222204   21144 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 07:09:16.222214   21144 certs.go:257] generating profile certs ...
	I1229 07:09:16.222330   21144 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key
	I1229 07:09:16.222384   21144 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key.a4651613
	I1229 07:09:16.222444   21144 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key
	I1229 07:09:16.222593   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 07:09:16.222640   21144 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 07:09:16.222649   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:09:16.222683   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:09:16.222732   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:09:16.222762   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 07:09:16.222857   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:09:16.223814   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:09:16.259745   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:09:16.289889   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:09:16.326260   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:09:16.358438   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:09:16.390832   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:09:16.422104   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:09:16.453590   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:09:16.484628   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:09:16.515097   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 07:09:16.545423   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 07:09:16.576428   21144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:09:16.598198   21144 ssh_runner.go:195] Run: openssl version
	I1229 07:09:16.604919   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.616843   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 07:09:16.628930   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.634304   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.634358   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.642266   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:09:16.654506   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.666895   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 07:09:16.678959   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.684549   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.684610   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.692570   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:09:16.704782   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.717059   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:09:16.728888   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.734067   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.734122   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.741254   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:09:16.753067   21144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:09:16.758242   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:09:16.765682   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:09:16.773077   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:09:16.780312   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:09:16.787576   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:09:16.794989   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:09:16.802975   21144 kubeadm.go:401] StartCluster: {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:16.803131   21144 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:09:16.822479   21144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:09:16.835464   21144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:09:16.847946   21144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:09:16.859599   21144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:09:16.859610   21144 kubeadm.go:158] found existing configuration files:
	
	I1229 07:09:16.859660   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 07:09:16.871193   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:09:16.871261   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:09:16.883523   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 07:09:16.896218   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:09:16.896282   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:09:16.911191   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 07:09:16.924861   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:09:16.924909   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:09:16.944303   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 07:09:16.962588   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:09:16.962645   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:09:16.977278   21144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 07:09:17.182115   21144 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:09:17.182201   21144 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 07:09:17.182249   21144 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 07:09:17.182388   21144 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 07:09:17.182445   21144 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 07:09:17.182524   21144 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:09:17.184031   21144 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:09:17.184089   21144 kubeadm.go:319] [preflight] Running pre-flight checks
	W1229 07:09:17.184195   21144 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:09:17.184268   21144 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 07:09:17.243543   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:09:17.260150   21144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:09:17.273154   21144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:09:17.273164   21144 kubeadm.go:158] found existing configuration files:
	
	I1229 07:09:17.273225   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 07:09:17.284873   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:09:17.284932   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:09:17.296898   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 07:09:17.307707   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:09:17.307770   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:09:17.320033   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 07:09:17.331276   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:09:17.331337   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:09:17.342966   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 07:09:17.354640   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:09:17.354687   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:09:17.366632   21144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 07:09:17.552872   21144 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:09:17.552925   21144 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 07:09:17.552984   21144 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 07:09:17.553138   21144 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 07:09:17.553225   21144 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 07:09:17.553323   21144 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:09:17.554897   21144 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:09:17.554936   21144 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:09:17.555003   21144 kubeadm.go:403] duration metric: took 752.035112ms to StartCluster
	I1229 07:09:17.555040   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:09:17.555086   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:09:17.587966   21144 cri.go:96] found id: ""
	I1229 07:09:17.587989   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.587998   21144 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:09:17.588005   21144 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:09:17.588086   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:09:17.620754   21144 cri.go:96] found id: ""
	I1229 07:09:17.620772   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.620788   21144 logs.go:284] No container was found matching "etcd"
	I1229 07:09:17.620811   21144 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:09:17.620876   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:09:17.652137   21144 cri.go:96] found id: ""
	I1229 07:09:17.652158   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.652168   21144 logs.go:284] No container was found matching "coredns"
	I1229 07:09:17.652174   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:09:17.652227   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:09:17.684490   21144 cri.go:96] found id: ""
	I1229 07:09:17.684506   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.684514   21144 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:09:17.684520   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:09:17.684583   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:09:17.716008   21144 cri.go:96] found id: ""
	I1229 07:09:17.716024   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.716031   21144 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:09:17.716036   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:09:17.716108   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:09:17.749478   21144 cri.go:96] found id: ""
	I1229 07:09:17.749496   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.749504   21144 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:09:17.749511   21144 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:09:17.749573   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:09:17.781397   21144 cri.go:96] found id: ""
	I1229 07:09:17.781414   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.781421   21144 logs.go:284] No container was found matching "kindnet"
	I1229 07:09:17.781425   21144 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:09:17.781474   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:09:17.813076   21144 cri.go:96] found id: ""
	I1229 07:09:17.813093   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.813116   21144 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:09:17.813127   21144 logs.go:123] Gathering logs for container status ...
	I1229 07:09:17.813139   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:09:17.855147   21144 logs.go:123] Gathering logs for kubelet ...
	I1229 07:09:17.855165   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:09:17.931133   21144 logs.go:123] Gathering logs for dmesg ...
	I1229 07:09:17.931160   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:09:17.948718   21144 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:09:17.948738   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 07:10:18.030239   21144 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.08145445s)
	W1229 07:10:18.030299   21144 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 07:10:18.030308   21144 logs.go:123] Gathering logs for Docker ...
	I1229 07:10:18.030323   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1229 07:10:18.094027   21144 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:10:18.094084   21144 out.go:285] * 
	W1229 07:10:18.094156   21144 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:10:18.094163   21144 out.go:285] * 
	W1229 07:10:18.094381   21144 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:10:18.097345   21144 out.go:203] 
	W1229 07:10:18.098865   21144 out.go:285] X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:10:18.098945   21144 out.go:285] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1229 07:10:18.098968   21144 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1229 07:10:18.100537   21144 out.go:203] 
	
	
	==> Docker <==
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.752007897Z" level=info msg="Loading containers: done."
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767040399Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767118717Z" level=info msg="Initializing buildkit"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.788494375Z" level=info msg="Completed buildkit initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794133927Z" level=info msg="Daemon has completed initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794208259Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794363772Z" level=info msg="API listen on /run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794367632Z" level=info msg="API listen on [::]:2376"
	Dec 29 07:09:14 functional-695625 systemd[1]: Started Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Consumed 4.681s CPU time, 18.9M memory peak.
	Dec 29 07:09:15 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Loaded network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Setting cgroupDriver systemd"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:09:15 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 07:09] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> kernel <==
	 07:14:06 up 21 min,  0 users,  load average: 0.03, 0.06, 0.08
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12046]: E1229 07:09:16.288680   12046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12175]: E1229 07:09:16.952765   12175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:17 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.678887331s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (152.12s)
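
Note: the GUEST_PORT_IN_USE failure above comes from kubeadm's preflight check finding port 8441 (the profile's APIServerPort) already bound, most likely by the crash-looping kube-apiserver left over from the earlier run. A minimal way to confirm which process holds the port, assuming shell access to the guest via `minikube ssh` (the commands below are illustrative and were not part of the test run; lsof may not be present in the guest image):

  # inside the guest: show the listener on the apiserver port
  out/minikube-linux-amd64 -p functional-695625 ssh -- sudo ss -ltnp 'sport = :8441'
  # equivalent check with lsof, if it is available in the guest
  out/minikube-linux-amd64 -p functional-695625 ssh -- sudo lsof -iTCP:8441 -sTCP:LISTEN

Once the owning PID is known it can be stopped (or the stale kube-apiserver container removed) before the soft start is retried.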

                                                
                                    
x
+
TestFunctional/serial/InvalidService (120.16s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-695625 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Non-zero exit: kubectl --context functional-695625 apply -f testdata/invalidsvc.yaml: exit status 1 (2m0.16287802s)

                                                
                                                
** stderr ** 
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "invalid-svc", Namespace: "default"
	from server for: "testdata/invalidsvc.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods invalid-svc)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=services", GroupVersionKind: "/v1, Kind=Service"
	Name: "invalid-svc", Namespace: "default"
	from server for: "testdata/invalidsvc.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get services invalid-svc)

                                                
                                                
** /stderr **
functional_test.go:2333: kubectl --context functional-695625 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (120.16s)
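
The apply above never reached admission; both GETs timed out because the apiserver at 192.168.39.121:8441 was not serving. A quick pre-check that distinguishes "apiserver down" from a genuinely invalid service definition, using only standard kubectl flags (illustrative, not part of the test):

  # fail fast instead of waiting out the default server timeout
  kubectl --context functional-695625 --request-timeout=10s get --raw /readyz
  # only if that returns ok is the apply itself worth retrying
  kubectl --context functional-695625 --request-timeout=10s apply -f testdata/invalidsvc.yaml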

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (168.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-695625 --alsologtostderr -v=1]
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs
functional_test.go:934: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs: (1m0.561459231s)
functional_test.go:938: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-695625 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-695625 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-695625 --alsologtostderr -v=1] stderr:
I1229 07:24:13.645325   25963 out.go:360] Setting OutFile to fd 1 ...
I1229 07:24:13.645599   25963 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:13.645608   25963 out.go:374] Setting ErrFile to fd 2...
I1229 07:24:13.645613   25963 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:13.645789   25963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
I1229 07:24:13.646061   25963 mustload.go:66] Loading cluster: functional-695625
I1229 07:24:13.646413   25963 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:13.648241   25963 host.go:66] Checking if "functional-695625" exists ...
I1229 07:24:13.648435   25963 api_server.go:166] Checking apiserver status ...
I1229 07:24:13.648473   25963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1229 07:24:13.650522   25963 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:13.650933   25963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 08:22:22 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
I1229 07:24:13.650960   25963 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:13.651103   25963 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
I1229 07:24:13.739076   25963 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2416/cgroup
I1229 07:24:13.750535   25963 ssh_runner.go:195] Run: sudo grep ^0:: /proc/2416/cgroup
I1229 07:24:13.762126   25963 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd173c000af26dcef62569d3a5345fcae.slice/docker-4b032678478a0db80f17dd1d989d5d3ad03f5c19d261d887ee8bbc80c0ef716c.scope/cgroup.freeze
I1229 07:24:13.774001   25963 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
I1229 07:24:18.774707   25963 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1229 07:24:18.774836   25963 retry.go:84] will retry after 200ms: state is "Stopped"
I1229 07:24:19.007271   25963 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
I1229 07:24:24.007972   25963 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1229 07:24:24.008015   25963 retry.go:84] will retry after 200ms: state is "Stopped"
I1229 07:24:24.248510   25963 api_server.go:299] Checking apiserver healthz at https://192.168.39.121:8441/healthz ...
I1229 07:24:29.249068   25963 api_server.go:315] stopped: https://192.168.39.121:8441/healthz: Get "https://192.168.39.121:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1229 07:24:29.251178   25963 out.go:179] * The control-plane node functional-695625 apiserver is not running: (state=Stopped)
I1229 07:24:29.252663   25963 out.go:179]   To start a cluster, run: "minikube start -p functional-695625"
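
The stderr above shows the dashboard command probing the apiserver's /healthz endpoint and retrying after 200ms before concluding the control plane is Stopped. A rough shell equivalent of that probe loop, handy for watching the endpoint by hand while debugging (the 5s timeout and three attempts mirror what is visible in this log, not any minikube-internal constant):

  for i in 1 2 3; do
    # -k because the apiserver serves a certificate signed by minikubeCA
    curl -sk --max-time 5 https://192.168.39.121:8441/healthz && break
    sleep 0.2
  done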
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.79494609s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m0.558059767s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service        │ functional-695625 service list                                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ service        │ functional-695625 service list -o json                                                                                     │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ start          │ -p functional-695625 --dry-run --memory 250MB --alsologtostderr --driver=kvm2                                              │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ start          │ -p functional-695625 --dry-run --alsologtostderr -v=1 --driver=kvm2                                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ service        │ functional-695625 service --namespace=default --https --url hello-node                                                     │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	│ service        │ functional-695625 service hello-node --url --format={{.IP}}                                                                │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	│ service        │ functional-695625 service hello-node --url                                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	│ start          │ -p functional-695625 --dry-run --memory 250MB --alsologtostderr --driver=kvm2                                              │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ cp             │ functional-695625 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ ssh            │ functional-695625 ssh -n functional-695625 sudo cat /home/docker/cp-test.txt                                               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ cp             │ functional-695625 cp functional-695625:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3240797659/001/cp-test.txt │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ ssh            │ functional-695625 ssh -n functional-695625 sudo cat /home/docker/cp-test.txt                                               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ cp             │ functional-695625 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ ssh            │ functional-695625 ssh -n functional-695625 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ dashboard      │ --url --port 0 -p functional-695625 --alsologtostderr -v=1                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ update-context │ functional-695625 update-context --alsologtostderr -v=2                                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ update-context │ functional-695625 update-context --alsologtostderr -v=2                                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ update-context │ functional-695625 update-context --alsologtostderr -v=2                                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ image          │ functional-695625 image ls --format short --alsologtostderr                                                                │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ image          │ functional-695625 image ls --format yaml --alsologtostderr                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ ssh            │ functional-695625 ssh pgrep buildkitd                                                                                      │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ image          │ functional-695625 image build -t localhost/my-image:functional-695625 testdata/build --alsologtostderr                     │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ image          │ functional-695625 image ls                                                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ image          │ functional-695625 image ls --format json --alsologtostderr                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ image          │ functional-695625 image ls --format table --alsologtostderr                                                                │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:24:12
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:24:12.577834   25866 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:24:12.578108   25866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:24:12.578120   25866 out.go:374] Setting ErrFile to fd 2...
	I1229 07:24:12.578124   25866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:24:12.578391   25866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:24:12.578874   25866 out.go:368] Setting JSON to false
	I1229 07:24:12.579759   25866 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4003,"bootTime":1766989050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:24:12.579858   25866 start.go:143] virtualization: kvm guest
	I1229 07:24:12.582170   25866 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:24:12.583620   25866 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:24:12.583612   25866 notify.go:221] Checking for updates...
	I1229 07:24:12.586166   25866 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:24:12.587629   25866 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:24:12.589023   25866 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:24:12.590535   25866 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:24:12.591900   25866 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:24:12.593453   25866 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:24:12.593981   25866 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:24:12.625888   25866 out.go:179] * Using the kvm2 driver based on the existing profile
	I1229 07:24:12.627045   25866 start.go:309] selected driver: kvm2
	I1229 07:24:12.627069   25866 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:24:12.627185   25866 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:24:12.629157   25866 out.go:203] 
	W1229 07:24:12.630360   25866 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1229 07:24:12.631483   25866 out.go:203] 
	
	
	==> Docker <==
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.752007897Z" level=info msg="Loading containers: done."
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767040399Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767118717Z" level=info msg="Initializing buildkit"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.788494375Z" level=info msg="Completed buildkit initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794133927Z" level=info msg="Daemon has completed initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794208259Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794363772Z" level=info msg="API listen on /run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794367632Z" level=info msg="API listen on [::]:2376"
	Dec 29 07:09:14 functional-695625 systemd[1]: Started Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Consumed 4.681s CPU time, 18.9M memory peak.
	Dec 29 07:09:15 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Loaded network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Setting cgroupDriver systemd"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:09:15 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 07:09] kauditd_printk_skb: 26 callbacks suppressed
	[Dec29 07:24] kauditd_printk_skb: 259 callbacks suppressed
	
	
	==> kernel <==
	 07:26:46 up 34 min,  0 users,  load average: 0.01, 0.05, 0.07
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12046]: E1229 07:09:16.288680   12046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12175]: E1229 07:09:16.952765   12175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:17 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.747416231s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (168.45s)
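
Separately from the dashboard failure itself, the "Last Start" log above records a dry-run start being rejected with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below minikube's 1800MB usable minimum. That particular validation would be cleared by any request at or above the floor, e.g. (illustrative invocation, reusing the flags already exercised by the test):

  out/minikube-linux-amd64 start -p functional-695625 --dry-run --memory 2048MB --alsologtostderr --driver=kvm2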

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (139.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 status: exit status 2 (15.836100038s)

                                                
                                                
-- stdout --
	functional-695625
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-695625 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (15.747597395s)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-695625 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 status -o json: exit status 2 (15.924149274s)

                                                
                                                
-- stdout --
	{"Name":"functional-695625","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-695625 status -o json" : exit status 2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.736036245s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m0.574569869s)
helpers_test.go:261: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-695625 ssh cat /mount-9p/test-1766992710427127984                                                                     │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh sudo umount -f /mount-9p                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdspecific-port525394001/001:/mount-9p --alsologtostderr -v=1 --port 33243 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh -- ls -la /mount-9p                                                                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh sudo umount -f /mount-9p                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount2 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount1                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount3 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount1 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount1                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount2                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount3                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ mount   │ -p functional-695625 --kill=true                                                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ addons  │ functional-695625 addons list                                                                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:20 UTC │ 29 Dec 25 07:20 UTC │
	│ addons  │ functional-695625 addons list -o json                                                                                            │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:20 UTC │ 29 Dec 25 07:20 UTC │
	│ service │ functional-695625 service list                                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ service │ functional-695625 service list -o json                                                                                           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ start   │ -p functional-695625 --dry-run --memory 250MB --alsologtostderr --driver=kvm2                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ start   │ -p functional-695625 --dry-run --alsologtostderr -v=1 --driver=kvm2                                                              │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ service │ functional-695625 service --namespace=default --https --url hello-node                                                           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	│ service │ functional-695625 service hello-node --url --format={{.IP}}                                                                      │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	│ service │ functional-695625 service hello-node --url                                                                                       │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:21:52
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:21:52.951895   25207 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:21:52.951994   25207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:21:52.952000   25207 out.go:374] Setting ErrFile to fd 2...
	I1229 07:21:52.952005   25207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:21:52.952198   25207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:21:52.952599   25207 out.go:368] Setting JSON to false
	I1229 07:21:52.953416   25207 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3863,"bootTime":1766989050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:21:52.953473   25207 start.go:143] virtualization: kvm guest
	I1229 07:21:52.955852   25207 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:21:52.957368   25207 notify.go:221] Checking for updates...
	I1229 07:21:52.957449   25207 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:21:52.958966   25207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:21:52.960365   25207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:21:52.962039   25207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:21:52.963554   25207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:21:52.965380   25207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:21:52.967500   25207 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:21:52.968091   25207 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:21:53.001192   25207 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 07:21:53.002517   25207 start.go:309] selected driver: kvm2
	I1229 07:21:53.002537   25207 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:21:53.002635   25207 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:21:53.003697   25207 cni.go:84] Creating CNI manager for ""
	I1229 07:21:53.003813   25207 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:21:53.003869   25207 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:21:53.005457   25207 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.752007897Z" level=info msg="Loading containers: done."
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767040399Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767118717Z" level=info msg="Initializing buildkit"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.788494375Z" level=info msg="Completed buildkit initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794133927Z" level=info msg="Daemon has completed initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794208259Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794363772Z" level=info msg="API listen on /run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794367632Z" level=info msg="API listen on [::]:2376"
	Dec 29 07:09:14 functional-695625 systemd[1]: Started Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Consumed 4.681s CPU time, 18.9M memory peak.
	Dec 29 07:09:15 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Loaded network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Setting cgroupDriver systemd"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:09:15 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 07:09] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> kernel <==
	 07:23:56 up 31 min,  0 users,  load average: 0.11, 0.09, 0.08
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12046]: E1229 07:09:16.288680   12046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12175]: E1229 07:09:16.952765   12175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:17 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.688859768s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (139.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (247.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-695625 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1641: (dbg) Non-zero exit: kubectl --context functional-695625 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server: exit status 1 (34.05719522s)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Timeout: request did not complete within requested timeout - context deadline exceeded

                                                
                                                
** /stderr **
functional_test.go:1643: failed to create hello-node deployment with this command "kubectl --context functional-695625 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server": exit status 1.
functional_test.go:1613: service test failed - dumping debug information
functional_test.go:1614: -----------------------service failure post-mortem--------------------------------
functional_test.go:1617: (dbg) Run:  kubectl --context functional-695625 describe po hello-node-connect
functional_test.go:1617: (dbg) Non-zero exit: kubectl --context functional-695625 describe po hello-node-connect: exit status 1 (1m0.058987067s)

                                                
                                                
** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods hello-node-connect)

                                                
                                                
** /stderr **
functional_test.go:1619: "kubectl --context functional-695625 describe po hello-node-connect" failed: exit status 1
functional_test.go:1621: hello-node pod describe:
functional_test.go:1623: (dbg) Run:  kubectl --context functional-695625 logs -l app=hello-node-connect
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-695625 logs -l app=hello-node-connect: signal: killed (59.946795041s)
functional_test.go:1625: "kubectl --context functional-695625 logs -l app=hello-node-connect" failed: signal: killed
functional_test.go:1627: hello-node logs:
functional_test.go:1629: (dbg) Run:  kubectl --context functional-695625 describe svc hello-node-connect
functional_test.go:1629: (dbg) Non-zero exit: kubectl --context functional-695625 describe svc hello-node-connect: context deadline exceeded (1.908µs)
functional_test.go:1631: "kubectl --context functional-695625 describe svc hello-node-connect" failed: context deadline exceeded
functional_test.go:1633: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (16.155052001s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m0.531193214s)
helpers_test.go:261: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-695625 ssh cat /mount-9p/test-1766992710427127984                                                                     │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh sudo umount -f /mount-9p                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdspecific-port525394001/001:/mount-9p --alsologtostderr -v=1 --port 33243 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh -- ls -la /mount-9p                                                                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh sudo umount -f /mount-9p                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount2 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount1                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount3 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount1 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount1                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount2                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount3                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ mount   │ -p functional-695625 --kill=true                                                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ addons  │ functional-695625 addons list                                                                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:20 UTC │ 29 Dec 25 07:20 UTC │
	│ addons  │ functional-695625 addons list -o json                                                                                            │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:20 UTC │ 29 Dec 25 07:20 UTC │
	│ service │ functional-695625 service list                                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ service │ functional-695625 service list -o json                                                                                           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ start   │ -p functional-695625 --dry-run --memory 250MB --alsologtostderr --driver=kvm2                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ start   │ -p functional-695625 --dry-run --alsologtostderr -v=1 --driver=kvm2                                                              │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ service │ functional-695625 service --namespace=default --https --url hello-node                                                           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	│ service │ functional-695625 service hello-node --url --format={{.IP}}                                                                      │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	│ service │ functional-695625 service hello-node --url                                                                                       │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:21:52
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:21:52.951895   25207 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:21:52.951994   25207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:21:52.952000   25207 out.go:374] Setting ErrFile to fd 2...
	I1229 07:21:52.952005   25207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:21:52.952198   25207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:21:52.952599   25207 out.go:368] Setting JSON to false
	I1229 07:21:52.953416   25207 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3863,"bootTime":1766989050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:21:52.953473   25207 start.go:143] virtualization: kvm guest
	I1229 07:21:52.955852   25207 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:21:52.957368   25207 notify.go:221] Checking for updates...
	I1229 07:21:52.957449   25207 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:21:52.958966   25207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:21:52.960365   25207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:21:52.962039   25207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:21:52.963554   25207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:21:52.965380   25207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:21:52.967500   25207 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:21:52.968091   25207 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:21:53.001192   25207 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 07:21:53.002517   25207 start.go:309] selected driver: kvm2
	I1229 07:21:53.002537   25207 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:21:53.002635   25207 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:21:53.003697   25207 cni.go:84] Creating CNI manager for ""
	I1229 07:21:53.003813   25207 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:21:53.003869   25207 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:21:53.005457   25207 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.752007897Z" level=info msg="Loading containers: done."
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767040399Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767118717Z" level=info msg="Initializing buildkit"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.788494375Z" level=info msg="Completed buildkit initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794133927Z" level=info msg="Daemon has completed initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794208259Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794363772Z" level=info msg="API listen on /run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794367632Z" level=info msg="API listen on [::]:2376"
	Dec 29 07:09:14 functional-695625 systemd[1]: Started Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Consumed 4.681s CPU time, 18.9M memory peak.
	Dec 29 07:09:15 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Loaded network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Setting cgroupDriver systemd"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:09:15 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 07:09] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> kernel <==
	 07:24:03 up 31 min,  0 users,  load average: 0.10, 0.09, 0.08
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12046]: E1229 07:09:16.288680   12046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12175]: E1229 07:09:16.952765   12175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:17 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (16.927580221s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (247.68s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (347.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
helpers_test.go:338: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
helpers_test.go:338: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
helpers_test.go:338: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.121:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.706432514s)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.794674937s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
E1229 07:23:43.092664   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m0.52446215s)
helpers_test.go:261: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-695625 ssh cat /mount-9p/test-1766992710427127984                                                                     │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh sudo umount -f /mount-9p                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdspecific-port525394001/001:/mount-9p --alsologtostderr -v=1 --port 33243 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh -- ls -la /mount-9p                                                                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh sudo umount -f /mount-9p                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount2 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount1                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount3 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount1 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount1                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount2                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount3                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ mount   │ -p functional-695625 --kill=true                                                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ addons  │ functional-695625 addons list                                                                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:20 UTC │ 29 Dec 25 07:20 UTC │
	│ addons  │ functional-695625 addons list -o json                                                                                            │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:20 UTC │ 29 Dec 25 07:20 UTC │
	│ service │ functional-695625 service list                                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ service │ functional-695625 service list -o json                                                                                           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ start   │ -p functional-695625 --dry-run --memory 250MB --alsologtostderr --driver=kvm2                                                    │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ start   │ -p functional-695625 --dry-run --alsologtostderr -v=1 --driver=kvm2                                                              │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:21 UTC │                     │
	│ service │ functional-695625 service --namespace=default --https --url hello-node                                                           │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	│ service │ functional-695625 service hello-node --url --format={{.IP}}                                                                      │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	│ service │ functional-695625 service hello-node --url                                                                                       │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:22 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:21:52
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:21:52.951895   25207 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:21:52.951994   25207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:21:52.952000   25207 out.go:374] Setting ErrFile to fd 2...
	I1229 07:21:52.952005   25207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:21:52.952198   25207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:21:52.952599   25207 out.go:368] Setting JSON to false
	I1229 07:21:52.953416   25207 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3863,"bootTime":1766989050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:21:52.953473   25207 start.go:143] virtualization: kvm guest
	I1229 07:21:52.955852   25207 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:21:52.957368   25207 notify.go:221] Checking for updates...
	I1229 07:21:52.957449   25207 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:21:52.958966   25207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:21:52.960365   25207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:21:52.962039   25207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:21:52.963554   25207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:21:52.965380   25207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:21:52.967500   25207 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:21:52.968091   25207 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:21:53.001192   25207 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 07:21:53.002517   25207 start.go:309] selected driver: kvm2
	I1229 07:21:53.002537   25207 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:21:53.002635   25207 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:21:53.003697   25207 cni.go:84] Creating CNI manager for ""
	I1229 07:21:53.003813   25207 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:21:53.003869   25207 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:21:53.005457   25207 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.752007897Z" level=info msg="Loading containers: done."
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767040399Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767118717Z" level=info msg="Initializing buildkit"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.788494375Z" level=info msg="Completed buildkit initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794133927Z" level=info msg="Daemon has completed initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794208259Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794363772Z" level=info msg="API listen on /run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794367632Z" level=info msg="API listen on [::]:2376"
	Dec 29 07:09:14 functional-695625 systemd[1]: Started Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Consumed 4.681s CPU time, 18.9M memory peak.
	Dec 29 07:09:15 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Loaded network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Setting cgroupDriver systemd"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:09:15 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 07:09] kauditd_printk_skb: 26 callbacks suppressed
	[Dec29 07:24] kauditd_printk_skb: 259 callbacks suppressed
	
	
	==> kernel <==
	 07:24:40 up 32 min,  0 users,  load average: 0.11, 0.09, 0.08
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12046]: E1229 07:09:16.288680   12046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12175]: E1229 07:09:16.952765   12175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:17 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.738577943s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (347.77s)
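Note on the failure above: the 4m0s wait polls kube-system for pods labelled integration-test=storage-provisioner, and every poll timed out against the unresponsive apiserver on 192.168.39.121:8441. A rough manual equivalent of what the test was waiting for (only meaningful once the apiserver answers; names are taken from the output above):

	# the selector the test polls, run by hand against the same context
	kubectl --context functional-695625 -n kube-system get pods -l integration-test=storage-provisioner
	# quick readiness probe of the apiserver behind the same kubeconfig context
	kubectl --context functional-695625 get --raw /readyz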

TestFunctional/parallel/MySQL (160.08s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-695625 replace --force -f testdata/mysql.yaml
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-695625 replace --force -f testdata/mysql.yaml: exit status 1 (1m8.065581472s)

** stderr ** 
	Error from server (Timeout): error when deleting "testdata/mysql.yaml": Timeout: request did not complete within requested timeout - context deadline exceeded
	Error from server (Timeout): error when deleting "testdata/mysql.yaml": Timeout: request did not complete within requested timeout - context deadline exceeded

** /stderr **
functional_test.go:1805: failed to kubectl replace mysql: args "kubectl --context functional-695625 replace --force -f testdata/mysql.yaml" failed: exit status 1
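Note on the failure above: kubectl replace --force deletes the objects before recreating them, so both Timeout errors come from the delete phase hitting the unresponsive apiserver. A hedged manual equivalent that fails fast instead of waiting a full minute (the 30s value is illustrative, not what the test uses):

	# split the forced replace into an explicit delete/apply with a short client-side timeout
	kubectl --context functional-695625 --request-timeout=30s delete -f testdata/mysql.yaml --ignore-not-found
	kubectl --context functional-695625 --request-timeout=30s apply -f testdata/mysql.yaml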
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.693859895s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m0.550462486s)
helpers_test.go:261: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-695625 ssh sudo cat /etc/ssl/certs/51391683.0                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh sudo cat /etc/ssl/certs/134862.pem                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh sudo cat /usr/share/ca-certificates/134862.pem                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh sudo cat /etc/test/nested/copy/13486/hosts                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdany-port1045147339/001:/mount-9p --alsologtostderr -v=1                  │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh -- ls -la /mount-9p                                                                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh cat /mount-9p/test-1766992710427127984                                                                     │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh sudo umount -f /mount-9p                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdspecific-port525394001/001:/mount-9p --alsologtostderr -v=1 --port 33243 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh -- ls -la /mount-9p                                                                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh sudo umount -f /mount-9p                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount2 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount1                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount3 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount1 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount1                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount2                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount3                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ mount   │ -p functional-695625 --kill=true                                                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:09:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:09:11.823825   21144 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:09:11.824087   21144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:11.824091   21144 out.go:374] Setting ErrFile to fd 2...
	I1229 07:09:11.824094   21144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:11.824292   21144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:09:11.824739   21144 out.go:368] Setting JSON to false
	I1229 07:09:11.825573   21144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3102,"bootTime":1766989050,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:09:11.825626   21144 start.go:143] virtualization: kvm guest
	I1229 07:09:11.828181   21144 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:09:11.830065   21144 notify.go:221] Checking for updates...
	I1229 07:09:11.830099   21144 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:09:11.832513   21144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:09:11.834171   21144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:09:11.835714   21144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:09:11.837182   21144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:09:11.838613   21144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:09:11.840293   21144 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:09:11.840375   21144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:09:11.872577   21144 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 07:09:11.874034   21144 start.go:309] selected driver: kvm2
	I1229 07:09:11.874043   21144 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:11.874148   21144 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:09:11.875008   21144 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:09:11.875031   21144 cni.go:84] Creating CNI manager for ""
	I1229 07:09:11.875088   21144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:09:11.875135   21144 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:11.875236   21144 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:09:11.877238   21144 out.go:179] * Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	I1229 07:09:11.878662   21144 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:09:11.878689   21144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 07:09:11.878696   21144 cache.go:65] Caching tarball of preloaded images
	I1229 07:09:11.878855   21144 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 07:09:11.878865   21144 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:09:11.878973   21144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/config.json ...
	I1229 07:09:11.879179   21144 start.go:360] acquireMachinesLock for functional-695625: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 07:09:11.879222   21144 start.go:364] duration metric: took 30.478µs to acquireMachinesLock for "functional-695625"
	I1229 07:09:11.879237   21144 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:09:11.879242   21144 fix.go:54] fixHost starting: 
	I1229 07:09:11.881414   21144 fix.go:112] recreateIfNeeded on functional-695625: state=Running err=<nil>
	W1229 07:09:11.881433   21144 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:09:11.883307   21144 out.go:252] * Updating the running kvm2 "functional-695625" VM ...
	I1229 07:09:11.883328   21144 machine.go:94] provisionDockerMachine start ...
	I1229 07:09:11.886670   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.887242   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:11.887262   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.887496   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:11.887732   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:11.887736   21144 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:09:11.991696   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 07:09:11.991727   21144 buildroot.go:166] provisioning hostname "functional-695625"
	I1229 07:09:11.994978   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.995530   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:11.995549   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.995737   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:11.995938   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:11.995945   21144 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-695625 && echo "functional-695625" | sudo tee /etc/hostname
	I1229 07:09:12.119417   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 07:09:12.122745   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.123272   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.123300   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.123538   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.123821   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.123838   21144 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-695625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-695625/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-695625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:09:12.232450   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:09:12.232465   21144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 07:09:12.232498   21144 buildroot.go:174] setting up certificates
	I1229 07:09:12.232516   21144 provision.go:84] configureAuth start
	I1229 07:09:12.235023   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.235391   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.235407   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.237672   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.238025   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.238038   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.238176   21144 provision.go:143] copyHostCerts
	I1229 07:09:12.238217   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 07:09:12.238229   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:09:12.238296   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 07:09:12.238403   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 07:09:12.238407   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:09:12.238432   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 07:09:12.238491   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 07:09:12.238499   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:09:12.238520   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 07:09:12.238615   21144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.functional-695625 san=[127.0.0.1 192.168.39.121 functional-695625 localhost minikube]
	I1229 07:09:12.295310   21144 provision.go:177] copyRemoteCerts
	I1229 07:09:12.295367   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:09:12.298377   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.298846   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.298865   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.299023   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:12.383429   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:09:12.418297   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:09:12.449932   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:09:12.480676   21144 provision.go:87] duration metric: took 248.13234ms to configureAuth
	I1229 07:09:12.480699   21144 buildroot.go:189] setting minikube options for container-runtime
	I1229 07:09:12.480912   21144 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:09:12.483638   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.484264   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.484283   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.484490   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.484748   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.484754   21144 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:09:12.588448   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 07:09:12.588460   21144 buildroot.go:70] root file system type: tmpfs
	I1229 07:09:12.588547   21144 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:09:12.591297   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.591753   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.591783   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.591962   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.592154   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.592188   21144 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:09:12.712416   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:09:12.715731   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.716163   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.716179   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.716376   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.716633   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.716644   21144 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:09:12.827453   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:09:12.827470   21144 machine.go:97] duration metric: took 944.134387ms to provisionDockerMachine
	I1229 07:09:12.827483   21144 start.go:293] postStartSetup for "functional-695625" (driver="kvm2")
	I1229 07:09:12.827495   21144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:09:12.827561   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:09:12.831103   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.831472   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.831495   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.831644   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:12.914033   21144 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:09:12.918908   21144 info.go:137] Remote host: Buildroot 2025.02
	I1229 07:09:12.918929   21144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 07:09:12.919006   21144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 07:09:12.919110   21144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 07:09:12.919214   21144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> hosts in /etc/test/nested/copy/13486
	I1229 07:09:12.919253   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13486
	I1229 07:09:12.931251   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:09:12.961219   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts --> /etc/test/nested/copy/13486/hosts (40 bytes)
	I1229 07:09:12.994141   21144 start.go:296] duration metric: took 166.645883ms for postStartSetup
	I1229 07:09:12.994171   21144 fix.go:56] duration metric: took 1.114929026s for fixHost
	I1229 07:09:12.997310   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.997695   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.997713   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.997933   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.998123   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.998127   21144 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 07:09:13.101274   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766992153.087186683
	
	I1229 07:09:13.101290   21144 fix.go:216] guest clock: 1766992153.087186683
	I1229 07:09:13.101310   21144 fix.go:229] Guest: 2025-12-29 07:09:13.087186683 +0000 UTC Remote: 2025-12-29 07:09:12.994173684 +0000 UTC m=+1.216768593 (delta=93.012999ms)
	I1229 07:09:13.101325   21144 fix.go:200] guest clock delta is within tolerance: 93.012999ms
	I1229 07:09:13.101328   21144 start.go:83] releasing machines lock for "functional-695625", held for 1.222099797s
	I1229 07:09:13.104421   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.104778   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.104809   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.105311   21144 ssh_runner.go:195] Run: cat /version.json
	I1229 07:09:13.105384   21144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:09:13.108188   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.108465   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.108487   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.108626   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:13.108649   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.109272   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.109293   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.109456   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:13.188367   21144 ssh_runner.go:195] Run: systemctl --version
	I1229 07:09:13.214864   21144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:09:13.221871   21144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:09:13.221939   21144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:09:13.234387   21144 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:09:13.234409   21144 start.go:496] detecting cgroup driver to use...
	I1229 07:09:13.234439   21144 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:09:13.234555   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:09:13.265557   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:09:13.279647   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:09:13.292829   21144 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:09:13.292880   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:09:13.305636   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:09:13.318870   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:09:13.332057   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:09:13.345233   21144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:09:13.358882   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:09:13.371537   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:09:13.384369   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:09:13.398107   21144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:09:13.409570   21144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:09:13.422369   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:13.586635   21144 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 07:09:13.630254   21144 start.go:496] detecting cgroup driver to use...
	I1229 07:09:13.630285   21144 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:09:13.630342   21144 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:09:13.649562   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:09:13.669222   21144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:09:13.690312   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:09:13.709458   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:09:13.726376   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:09:13.751705   21144 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:09:13.756140   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:09:13.768404   21144 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:09:13.789872   21144 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:09:13.962749   21144 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:09:14.121274   21144 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:09:14.121382   21144 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 07:09:14.144014   21144 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:09:14.159574   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:14.318011   21144 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 07:09:14.810377   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:09:14.828678   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:09:14.845136   21144 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 07:09:14.867262   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:09:14.884057   21144 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:09:15.045033   21144 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:09:15.204271   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:15.357839   21144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:09:15.393570   21144 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:09:15.410289   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:15.569395   21144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:09:15.702195   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:09:15.721913   21144 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:09:15.721983   21144 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:09:15.728173   21144 start.go:574] Will wait 60s for crictl version
	I1229 07:09:15.728240   21144 ssh_runner.go:195] Run: which crictl
	I1229 07:09:15.732532   21144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 07:09:15.768758   21144 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 07:09:15.768832   21144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:09:15.798391   21144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:09:15.827196   21144 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 07:09:15.830472   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:15.830929   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:15.830951   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:15.831160   21144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 07:09:15.838098   21144 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1229 07:09:15.839808   21144 kubeadm.go:884] updating cluster {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:09:15.839935   21144 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:09:15.840017   21144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:09:15.861298   21144 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-695625
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1229 07:09:15.861312   21144 docker.go:624] Images already preloaded, skipping extraction
	I1229 07:09:15.861369   21144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:09:15.881522   21144 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-695625
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1229 07:09:15.881540   21144 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:09:15.881547   21144 kubeadm.go:935] updating node { 192.168.39.121 8441 v1.35.0 docker true true} ...
	I1229 07:09:15.881633   21144 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-695625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:09:15.881681   21144 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 07:09:15.935676   21144 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1229 07:09:15.935701   21144 cni.go:84] Creating CNI manager for ""
	I1229 07:09:15.935727   21144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:09:15.935738   21144 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:09:15.935764   21144 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-695625 NodeName:functional-695625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:09:15.935924   21144 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-695625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:09:15.935984   21144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:09:15.948561   21144 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:09:15.948636   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:09:15.961301   21144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1229 07:09:15.983422   21144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:09:16.005682   21144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I1229 07:09:16.029474   21144 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I1229 07:09:16.034228   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:16.201925   21144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:09:16.221870   21144 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625 for IP: 192.168.39.121
	I1229 07:09:16.221886   21144 certs.go:195] generating shared ca certs ...
	I1229 07:09:16.221906   21144 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:09:16.222138   21144 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 07:09:16.222204   21144 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 07:09:16.222214   21144 certs.go:257] generating profile certs ...
	I1229 07:09:16.222330   21144 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key
	I1229 07:09:16.222384   21144 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key.a4651613
	I1229 07:09:16.222444   21144 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key
	I1229 07:09:16.222593   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 07:09:16.222640   21144 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 07:09:16.222649   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:09:16.222683   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:09:16.222732   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:09:16.222762   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 07:09:16.222857   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:09:16.223814   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:09:16.259745   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:09:16.289889   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:09:16.326260   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:09:16.358438   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:09:16.390832   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:09:16.422104   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:09:16.453590   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:09:16.484628   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:09:16.515097   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 07:09:16.545423   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 07:09:16.576428   21144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:09:16.598198   21144 ssh_runner.go:195] Run: openssl version
	I1229 07:09:16.604919   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.616843   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 07:09:16.628930   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.634304   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.634358   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.642266   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:09:16.654506   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.666895   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 07:09:16.678959   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.684549   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.684610   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.692570   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:09:16.704782   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.717059   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:09:16.728888   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.734067   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.734122   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.741254   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:09:16.753067   21144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:09:16.758242   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:09:16.765682   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:09:16.773077   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:09:16.780312   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:09:16.787576   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:09:16.794989   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:09:16.802975   21144 kubeadm.go:401] StartCluster: {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35
.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:16.803131   21144 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:09:16.822479   21144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:09:16.835464   21144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:09:16.847946   21144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:09:16.859599   21144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:09:16.859610   21144 kubeadm.go:158] found existing configuration files:
	
	I1229 07:09:16.859660   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 07:09:16.871193   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:09:16.871261   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:09:16.883523   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 07:09:16.896218   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:09:16.896282   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:09:16.911191   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 07:09:16.924861   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:09:16.924909   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:09:16.944303   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 07:09:16.962588   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:09:16.962645   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:09:16.977278   21144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 07:09:17.182115   21144 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:09:17.182201   21144 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 07:09:17.182249   21144 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 07:09:17.182388   21144 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 07:09:17.182445   21144 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 07:09:17.182524   21144 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:09:17.184031   21144 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:09:17.184089   21144 kubeadm.go:319] [preflight] Running pre-flight checks
	W1229 07:09:17.184195   21144 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:09:17.184268   21144 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 07:09:17.243543   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:09:17.260150   21144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:09:17.273154   21144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:09:17.273164   21144 kubeadm.go:158] found existing configuration files:
	
	I1229 07:09:17.273225   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 07:09:17.284873   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:09:17.284932   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:09:17.296898   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 07:09:17.307707   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:09:17.307770   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:09:17.320033   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 07:09:17.331276   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:09:17.331337   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:09:17.342966   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 07:09:17.354640   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:09:17.354687   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:09:17.366632   21144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 07:09:17.552872   21144 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:09:17.552925   21144 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 07:09:17.552984   21144 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 07:09:17.553138   21144 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 07:09:17.553225   21144 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 07:09:17.553323   21144 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:09:17.554897   21144 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:09:17.554936   21144 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:09:17.555003   21144 kubeadm.go:403] duration metric: took 752.035112ms to StartCluster
	I1229 07:09:17.555040   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:09:17.555086   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:09:17.587966   21144 cri.go:96] found id: ""
	I1229 07:09:17.587989   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.587998   21144 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:09:17.588005   21144 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:09:17.588086   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:09:17.620754   21144 cri.go:96] found id: ""
	I1229 07:09:17.620772   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.620788   21144 logs.go:284] No container was found matching "etcd"
	I1229 07:09:17.620811   21144 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:09:17.620876   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:09:17.652137   21144 cri.go:96] found id: ""
	I1229 07:09:17.652158   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.652168   21144 logs.go:284] No container was found matching "coredns"
	I1229 07:09:17.652174   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:09:17.652227   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:09:17.684490   21144 cri.go:96] found id: ""
	I1229 07:09:17.684506   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.684514   21144 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:09:17.684520   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:09:17.684583   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:09:17.716008   21144 cri.go:96] found id: ""
	I1229 07:09:17.716024   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.716031   21144 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:09:17.716036   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:09:17.716108   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:09:17.749478   21144 cri.go:96] found id: ""
	I1229 07:09:17.749496   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.749504   21144 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:09:17.749511   21144 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:09:17.749573   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:09:17.781397   21144 cri.go:96] found id: ""
	I1229 07:09:17.781414   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.781421   21144 logs.go:284] No container was found matching "kindnet"
	I1229 07:09:17.781425   21144 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:09:17.781474   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:09:17.813076   21144 cri.go:96] found id: ""
	I1229 07:09:17.813093   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.813116   21144 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:09:17.813127   21144 logs.go:123] Gathering logs for container status ...
	I1229 07:09:17.813139   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:09:17.855147   21144 logs.go:123] Gathering logs for kubelet ...
	I1229 07:09:17.855165   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:09:17.931133   21144 logs.go:123] Gathering logs for dmesg ...
	I1229 07:09:17.931160   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:09:17.948718   21144 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:09:17.948738   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 07:10:18.030239   21144 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.08145445s)
	W1229 07:10:18.030299   21144 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 07:10:18.030308   21144 logs.go:123] Gathering logs for Docker ...
	I1229 07:10:18.030323   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1229 07:10:18.094027   21144 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:10:18.094084   21144 out.go:285] * 
	W1229 07:10:18.094156   21144 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:10:18.094163   21144 out.go:285] * 
	W1229 07:10:18.094381   21144 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:10:18.097345   21144 out.go:203] 
	W1229 07:10:18.098865   21144 out.go:285] X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:10:18.098945   21144 out.go:285] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1229 07:10:18.098968   21144 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1229 07:10:18.100537   21144 out.go:203] 
	
	
	==> Docker <==
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.752007897Z" level=info msg="Loading containers: done."
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767040399Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767118717Z" level=info msg="Initializing buildkit"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.788494375Z" level=info msg="Completed buildkit initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794133927Z" level=info msg="Daemon has completed initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794208259Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794363772Z" level=info msg="API listen on /run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794367632Z" level=info msg="API listen on [::]:2376"
	Dec 29 07:09:14 functional-695625 systemd[1]: Started Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Consumed 4.681s CPU time, 18.9M memory peak.
	Dec 29 07:09:15 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Loaded network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Setting cgroupDriver systemd"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:09:15 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 07:09] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> kernel <==
	 07:20:48 up 28 min,  0 users,  load average: 0.19, 0.07, 0.07
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12046]: E1229 07:09:16.288680   12046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12175]: E1229 07:09:16.952765   12175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:17 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.748057251s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (160.08s)

x
+
TestFunctional/parallel/NodeLabels (152.39s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-695625 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-695625 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (1m0.070753701s)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-695625 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-695625 -n functional-695625: exit status 2 (15.846722993s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs -n 25: (1m0.556825837s)
helpers_test.go:261: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-695625 ssh sudo cat /etc/ssl/certs/51391683.0                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh sudo cat /etc/ssl/certs/134862.pem                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh sudo cat /usr/share/ca-certificates/134862.pem                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh sudo cat /etc/test/nested/copy/13486/hosts                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdany-port1045147339/001:/mount-9p --alsologtostderr -v=1                  │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh -- ls -la /mount-9p                                                                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh cat /mount-9p/test-1766992710427127984                                                                     │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:18 UTC │ 29 Dec 25 07:18 UTC │
	│ ssh     │ functional-695625 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh sudo umount -f /mount-9p                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdspecific-port525394001/001:/mount-9p --alsologtostderr -v=1 --port 33243 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh -- ls -la /mount-9p                                                                                        │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh sudo umount -f /mount-9p                                                                                   │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount2 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount1                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount3 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ mount   │ -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount1 --alsologtostderr -v=1               │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	│ ssh     │ functional-695625 ssh findmnt -T /mount1                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount2                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ ssh     │ functional-695625 ssh findmnt -T /mount3                                                                                         │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │ 29 Dec 25 07:19 UTC │
	│ mount   │ -p functional-695625 --kill=true                                                                                                 │ functional-695625 │ jenkins │ v1.37.0 │ 29 Dec 25 07:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:09:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:09:11.823825   21144 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:09:11.824087   21144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:11.824091   21144 out.go:374] Setting ErrFile to fd 2...
	I1229 07:09:11.824094   21144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:11.824292   21144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:09:11.824739   21144 out.go:368] Setting JSON to false
	I1229 07:09:11.825573   21144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3102,"bootTime":1766989050,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:09:11.825626   21144 start.go:143] virtualization: kvm guest
	I1229 07:09:11.828181   21144 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:09:11.830065   21144 notify.go:221] Checking for updates...
	I1229 07:09:11.830099   21144 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:09:11.832513   21144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:09:11.834171   21144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:09:11.835714   21144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:09:11.837182   21144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:09:11.838613   21144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:09:11.840293   21144 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:09:11.840375   21144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:09:11.872577   21144 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 07:09:11.874034   21144 start.go:309] selected driver: kvm2
	I1229 07:09:11.874043   21144 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:11.874148   21144 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:09:11.875008   21144 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:09:11.875031   21144 cni.go:84] Creating CNI manager for ""
	I1229 07:09:11.875088   21144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:09:11.875135   21144 start.go:353] cluster config:
	{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:11.875236   21144 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:09:11.877238   21144 out.go:179] * Starting "functional-695625" primary control-plane node in "functional-695625" cluster
	I1229 07:09:11.878662   21144 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:09:11.878689   21144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 07:09:11.878696   21144 cache.go:65] Caching tarball of preloaded images
	I1229 07:09:11.878855   21144 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 07:09:11.878865   21144 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:09:11.878973   21144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/config.json ...
	I1229 07:09:11.879179   21144 start.go:360] acquireMachinesLock for functional-695625: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 07:09:11.879222   21144 start.go:364] duration metric: took 30.478µs to acquireMachinesLock for "functional-695625"
	I1229 07:09:11.879237   21144 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:09:11.879242   21144 fix.go:54] fixHost starting: 
	I1229 07:09:11.881414   21144 fix.go:112] recreateIfNeeded on functional-695625: state=Running err=<nil>
	W1229 07:09:11.881433   21144 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:09:11.883307   21144 out.go:252] * Updating the running kvm2 "functional-695625" VM ...
	I1229 07:09:11.883328   21144 machine.go:94] provisionDockerMachine start ...
	I1229 07:09:11.886670   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.887242   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:11.887262   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.887496   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:11.887732   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:11.887736   21144 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:09:11.991696   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 07:09:11.991727   21144 buildroot.go:166] provisioning hostname "functional-695625"
	I1229 07:09:11.994978   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.995530   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:11.995549   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:11.995737   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:11.995938   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:11.995945   21144 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-695625 && echo "functional-695625" | sudo tee /etc/hostname
	I1229 07:09:12.119417   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-695625
	
	I1229 07:09:12.122745   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.123272   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.123300   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.123538   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.123821   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.123838   21144 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-695625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-695625/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-695625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:09:12.232450   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:09:12.232465   21144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 07:09:12.232498   21144 buildroot.go:174] setting up certificates
	I1229 07:09:12.232516   21144 provision.go:84] configureAuth start
	I1229 07:09:12.235023   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.235391   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.235407   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.237672   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.238025   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.238038   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.238176   21144 provision.go:143] copyHostCerts
	I1229 07:09:12.238217   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 07:09:12.238229   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:09:12.238296   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 07:09:12.238403   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 07:09:12.238407   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:09:12.238432   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 07:09:12.238491   21144 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 07:09:12.238499   21144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:09:12.238520   21144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 07:09:12.238615   21144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.functional-695625 san=[127.0.0.1 192.168.39.121 functional-695625 localhost minikube]
	I1229 07:09:12.295310   21144 provision.go:177] copyRemoteCerts
	I1229 07:09:12.295367   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:09:12.298377   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.298846   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.298865   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.299023   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:12.383429   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:09:12.418297   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:09:12.449932   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:09:12.480676   21144 provision.go:87] duration metric: took 248.13234ms to configureAuth
	I1229 07:09:12.480699   21144 buildroot.go:189] setting minikube options for container-runtime
	I1229 07:09:12.480912   21144 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:09:12.483638   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.484264   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.484283   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.484490   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.484748   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.484754   21144 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:09:12.588448   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 07:09:12.588460   21144 buildroot.go:70] root file system type: tmpfs
	I1229 07:09:12.588547   21144 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:09:12.591297   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.591753   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.591783   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.591962   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.592154   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.592188   21144 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:09:12.712416   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:09:12.715731   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.716163   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.716179   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.716376   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.716633   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.716644   21144 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:09:12.827453   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:09:12.827470   21144 machine.go:97] duration metric: took 944.134387ms to provisionDockerMachine
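A note on the exchange just above (07:09:12.716): that single SSH command is minikube's write-then-swap update of docker.service. It renders the unit to docker.service.new, and only when `diff -u` reports a difference (or the old unit is missing) does it move the new file into place and reload/restart Docker. The same one-liner, split out with comments purely for readability; identical paths and flags, nothing added:

  # Compare the freshly rendered unit with the installed one; diff exits
  # non-zero when they differ or the old unit does not exist, which runs
  # the block after ||.
  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service  # swap in the new unit
    sudo systemctl -f daemon-reload                                                    # pick up the changed unit file
    sudo systemctl -f enable docker                                                    # keep docker enabled at boot
    sudo systemctl -f restart docker                                                   # apply the new ExecStart/TLS flags
  }

In this run the diff output is empty, so the unit was already up to date and the restart branch was skipped.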
	I1229 07:09:12.827483   21144 start.go:293] postStartSetup for "functional-695625" (driver="kvm2")
	I1229 07:09:12.827495   21144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:09:12.827561   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:09:12.831103   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.831472   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.831495   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.831644   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:12.914033   21144 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:09:12.918908   21144 info.go:137] Remote host: Buildroot 2025.02
	I1229 07:09:12.918929   21144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 07:09:12.919006   21144 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 07:09:12.919110   21144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 07:09:12.919214   21144 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts -> hosts in /etc/test/nested/copy/13486
	I1229 07:09:12.919253   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13486
	I1229 07:09:12.931251   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:09:12.961219   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts --> /etc/test/nested/copy/13486/hosts (40 bytes)
	I1229 07:09:12.994141   21144 start.go:296] duration metric: took 166.645883ms for postStartSetup
	I1229 07:09:12.994171   21144 fix.go:56] duration metric: took 1.114929026s for fixHost
	I1229 07:09:12.997310   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.997695   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:12.997713   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:12.997933   21144 main.go:144] libmachine: Using SSH client type: native
	I1229 07:09:12.998123   21144 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1229 07:09:12.998127   21144 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 07:09:13.101274   21144 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766992153.087186683
	
	I1229 07:09:13.101290   21144 fix.go:216] guest clock: 1766992153.087186683
	I1229 07:09:13.101310   21144 fix.go:229] Guest: 2025-12-29 07:09:13.087186683 +0000 UTC Remote: 2025-12-29 07:09:12.994173684 +0000 UTC m=+1.216768593 (delta=93.012999ms)
	I1229 07:09:13.101325   21144 fix.go:200] guest clock delta is within tolerance: 93.012999ms
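For reference, the clock delta that fix.go reports above is simply the guest timestamp minus the host-side reference taken a moment earlier, both shown in the preceding lines:

    1766992153.087186683 s   (guest, from `date +%s.%N`)
  - 1766992152.994173684 s   (host reference, 2025-12-29 07:09:12.994173684 UTC)
  = 0.093012999 s ≈ 93.013 ms

That matches the logged delta of 93.012999ms and is within the skew tolerance, so the guest clock is left as is.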
	I1229 07:09:13.101328   21144 start.go:83] releasing machines lock for "functional-695625", held for 1.222099797s
	I1229 07:09:13.104421   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.104778   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.104809   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.105311   21144 ssh_runner.go:195] Run: cat /version.json
	I1229 07:09:13.105384   21144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:09:13.108188   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.108465   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.108487   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.108626   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:13.108649   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.109272   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:13.109293   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:13.109456   21144 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
	I1229 07:09:13.188367   21144 ssh_runner.go:195] Run: systemctl --version
	I1229 07:09:13.214864   21144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:09:13.221871   21144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:09:13.221939   21144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:09:13.234387   21144 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:09:13.234409   21144 start.go:496] detecting cgroup driver to use...
	I1229 07:09:13.234439   21144 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:09:13.234555   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:09:13.265557   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:09:13.279647   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:09:13.292829   21144 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:09:13.292880   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:09:13.305636   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:09:13.318870   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:09:13.332057   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:09:13.345233   21144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:09:13.358882   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:09:13.371537   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:09:13.384369   21144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:09:13.398107   21144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:09:13.409570   21144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:09:13.422369   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:13.586635   21144 ssh_runner.go:195] Run: sudo systemctl restart containerd
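The series of sed edits above rewrites /etc/containerd/config.toml in place: it pins the sandbox (pause) image to registry.k8s.io/pause:3.10.1, sets SystemdCgroup = true to match the chosen systemd cgroup driver, normalizes the runtime type to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d, after which containerd is restarted. To confirm on the guest what those edits actually left behind, a quick check like the following works (illustrative only, standard grep):

  # Show the lines the sed edits were meant to change
  sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml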
	I1229 07:09:13.630254   21144 start.go:496] detecting cgroup driver to use...
	I1229 07:09:13.630285   21144 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:09:13.630342   21144 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:09:13.649562   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:09:13.669222   21144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:09:13.690312   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:09:13.709458   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:09:13.726376   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:09:13.751705   21144 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:09:13.756140   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:09:13.768404   21144 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:09:13.789872   21144 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:09:13.962749   21144 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:09:14.121274   21144 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:09:14.121382   21144 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
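The daemon.json written here is only 129 bytes and its contents are not echoed in the log. Since docker.go:578 says it is configuring Docker to use the "systemd" cgroup driver, the relevant option is the exec-opts setting shown below; this is a sketch of that option for comparison, not the exact bytes minikube generated:

  # Illustration only: the option a systemd-cgroup daemon.json needs
  echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }'
  # Compare with what minikube actually wrote on the guest:
  sudo cat /etc/docker/daemon.json

A change like this only takes effect after the daemon restarts, which is exactly what the daemon-reload and `systemctl restart docker` steps that follow do.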
	I1229 07:09:14.144014   21144 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:09:14.159574   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:14.318011   21144 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 07:09:14.810377   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:09:14.828678   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:09:14.845136   21144 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 07:09:14.867262   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:09:14.884057   21144 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:09:15.045033   21144 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:09:15.204271   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:15.357839   21144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:09:15.393570   21144 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:09:15.410289   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:15.569395   21144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:09:15.702195   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:09:15.721913   21144 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:09:15.721983   21144 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:09:15.728173   21144 start.go:574] Will wait 60s for crictl version
	I1229 07:09:15.728240   21144 ssh_runner.go:195] Run: which crictl
	I1229 07:09:15.732532   21144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 07:09:15.768758   21144 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 07:09:15.768832   21144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:09:15.798391   21144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:09:15.827196   21144 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 07:09:15.830472   21144 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:15.830929   21144 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
	I1229 07:09:15.830951   21144 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
	I1229 07:09:15.831160   21144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 07:09:15.838098   21144 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1229 07:09:15.839808   21144 kubeadm.go:884] updating cluster {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:09:15.839935   21144 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:09:15.840017   21144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:09:15.861298   21144 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-695625
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1229 07:09:15.861312   21144 docker.go:624] Images already preloaded, skipping extraction
	I1229 07:09:15.861369   21144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:09:15.881522   21144 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-695625
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1229 07:09:15.881540   21144 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:09:15.881547   21144 kubeadm.go:935] updating node { 192.168.39.121 8441 v1.35.0 docker true true} ...
	I1229 07:09:15.881633   21144 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-695625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:09:15.881681   21144 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 07:09:15.935676   21144 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1229 07:09:15.935701   21144 cni.go:84] Creating CNI manager for ""
	I1229 07:09:15.935727   21144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:09:15.935738   21144 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:09:15.935764   21144 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-695625 NodeName:functional-695625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:09:15.935924   21144 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-695625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:09:15.935984   21144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:09:15.948561   21144 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:09:15.948636   21144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:09:15.961301   21144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1229 07:09:15.983422   21144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:09:16.005682   21144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
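At this point the kubeadm config rendered at kubeadm.go:203 above has been copied to /var/tmp/minikube/kubeadm.yaml.new on the guest (2073 bytes). To inspect or sanity-check that file by hand, something along these lines works, using the kubeadm binary minikube already placed under /var/lib/minikube/binaries; the `kubeadm config validate` subcommand is assumed to be available in the bundled v1.35.0 build:

  # Look at exactly what minikube rendered
  sudo cat /var/tmp/minikube/kubeadm.yaml.new

  # Ask kubeadm whether the file parses and uses known API versions
  sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new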
	I1229 07:09:16.029474   21144 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I1229 07:09:16.034228   21144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:09:16.201925   21144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:09:16.221870   21144 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625 for IP: 192.168.39.121
	I1229 07:09:16.221886   21144 certs.go:195] generating shared ca certs ...
	I1229 07:09:16.221906   21144 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:09:16.222138   21144 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 07:09:16.222204   21144 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 07:09:16.222214   21144 certs.go:257] generating profile certs ...
	I1229 07:09:16.222330   21144 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.key
	I1229 07:09:16.222384   21144 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key.a4651613
	I1229 07:09:16.222444   21144 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key
	I1229 07:09:16.222593   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 07:09:16.222640   21144 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 07:09:16.222649   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:09:16.222683   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:09:16.222732   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:09:16.222762   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 07:09:16.222857   21144 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:09:16.223814   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:09:16.259745   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:09:16.289889   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:09:16.326260   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:09:16.358438   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:09:16.390832   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:09:16.422104   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:09:16.453590   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:09:16.484628   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:09:16.515097   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 07:09:16.545423   21144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 07:09:16.576428   21144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:09:16.598198   21144 ssh_runner.go:195] Run: openssl version
	I1229 07:09:16.604919   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.616843   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 07:09:16.628930   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.634304   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.634358   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 07:09:16.642266   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:09:16.654506   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.666895   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 07:09:16.678959   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.684549   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.684610   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 07:09:16.692570   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:09:16.704782   21144 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.717059   21144 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:09:16.728888   21144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.734067   21144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.734122   21144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:09:16.741254   21144 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
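The `openssl x509 -hash -noout` calls and the `test -L /etc/ssl/certs/<hash>.0` checks above are the two halves of OpenSSL's hashed-directory convention: every CA under /usr/share/ca-certificates is also reachable in /etc/ssl/certs through a symlink named after its subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A sketch of how such a link is built by hand, using minikubeCA.pem as the example; minikube only verifies the link here, and the creation step itself is not shown in this excerpt:

  # Compute the subject hash OpenSSL uses for lookups (e.g. b5213941)
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)

  # Expose the CA under the hashed name OpenSSL expects
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"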
	I1229 07:09:16.753067   21144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:09:16.758242   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:09:16.765682   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:09:16.773077   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:09:16.780312   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:09:16.787576   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:09:16.794989   21144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
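The six `-checkend 86400` probes above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will not have expired by then, 1 means it will, presumably so minikube can decide whether certificates need regeneration before reuse. A standalone example of the same check:

  # Prints "Certificate will not expire" and exits 0 while the cert is good for another 24h
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400; echo "exit=$?"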
	I1229 07:09:16.802975   21144 kubeadm.go:401] StartCluster: {Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:16.803131   21144 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:09:16.822479   21144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:09:16.835464   21144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:09:16.847946   21144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:09:16.859599   21144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:09:16.859610   21144 kubeadm.go:158] found existing configuration files:
	
	I1229 07:09:16.859660   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 07:09:16.871193   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:09:16.871261   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:09:16.883523   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 07:09:16.896218   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:09:16.896282   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:09:16.911191   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 07:09:16.924861   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:09:16.924909   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:09:16.944303   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 07:09:16.962588   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:09:16.962645   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:09:16.977278   21144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 07:09:17.182115   21144 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:09:17.182201   21144 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 07:09:17.182249   21144 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 07:09:17.182388   21144 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 07:09:17.182445   21144 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 07:09:17.182524   21144 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:09:17.184031   21144 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:09:17.184089   21144 kubeadm.go:319] [preflight] Running pre-flight checks
	W1229 07:09:17.184195   21144 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:09:17.184268   21144 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 07:09:17.243543   21144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:09:17.260150   21144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:09:17.273154   21144 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:09:17.273164   21144 kubeadm.go:158] found existing configuration files:
	
	I1229 07:09:17.273225   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1229 07:09:17.284873   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:09:17.284932   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:09:17.296898   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1229 07:09:17.307707   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:09:17.307770   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:09:17.320033   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1229 07:09:17.331276   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:09:17.331337   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:09:17.342966   21144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1229 07:09:17.354640   21144 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:09:17.354687   21144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:09:17.366632   21144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1229 07:09:17.552872   21144 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:09:17.552925   21144 kubeadm.go:319] [preflight] Some fatal errors occurred:
	I1229 07:09:17.552984   21144 kubeadm.go:319] 	[ERROR Port-8441]: Port 8441 is in use
	I1229 07:09:17.553138   21144 kubeadm.go:319] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1229 07:09:17.553225   21144 kubeadm.go:319] error: error execution phase preflight: preflight checks failed
	I1229 07:09:17.553323   21144 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:09:17.554897   21144 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:09:17.554936   21144 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:09:17.555003   21144 kubeadm.go:403] duration metric: took 752.035112ms to StartCluster
	I1229 07:09:17.555040   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:09:17.555086   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:09:17.587966   21144 cri.go:96] found id: ""
	I1229 07:09:17.587989   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.587998   21144 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:09:17.588005   21144 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:09:17.588086   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:09:17.620754   21144 cri.go:96] found id: ""
	I1229 07:09:17.620772   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.620788   21144 logs.go:284] No container was found matching "etcd"
	I1229 07:09:17.620811   21144 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:09:17.620876   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:09:17.652137   21144 cri.go:96] found id: ""
	I1229 07:09:17.652158   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.652168   21144 logs.go:284] No container was found matching "coredns"
	I1229 07:09:17.652174   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:09:17.652227   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:09:17.684490   21144 cri.go:96] found id: ""
	I1229 07:09:17.684506   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.684514   21144 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:09:17.684520   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:09:17.684583   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:09:17.716008   21144 cri.go:96] found id: ""
	I1229 07:09:17.716024   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.716031   21144 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:09:17.716036   21144 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:09:17.716108   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:09:17.749478   21144 cri.go:96] found id: ""
	I1229 07:09:17.749496   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.749504   21144 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:09:17.749511   21144 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:09:17.749573   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:09:17.781397   21144 cri.go:96] found id: ""
	I1229 07:09:17.781414   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.781421   21144 logs.go:284] No container was found matching "kindnet"
	I1229 07:09:17.781425   21144 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:09:17.781474   21144 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:09:17.813076   21144 cri.go:96] found id: ""
	I1229 07:09:17.813093   21144 logs.go:282] 0 containers: []
	W1229 07:09:17.813116   21144 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:09:17.813127   21144 logs.go:123] Gathering logs for container status ...
	I1229 07:09:17.813139   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:09:17.855147   21144 logs.go:123] Gathering logs for kubelet ...
	I1229 07:09:17.855165   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:09:17.931133   21144 logs.go:123] Gathering logs for dmesg ...
	I1229 07:09:17.931160   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:09:17.948718   21144 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:09:17.948738   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 07:10:18.030239   21144 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.08145445s)
	W1229 07:10:18.030299   21144 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1229 07:10:18.030308   21144 logs.go:123] Gathering logs for Docker ...
	I1229 07:10:18.030323   21144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1229 07:10:18.094027   21144 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:10:18.094084   21144 out.go:285] * 
	W1229 07:10:18.094156   21144 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:10:18.094163   21144 out.go:285] * 
	W1229 07:10:18.094381   21144 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:10:18.097345   21144 out.go:203] 
	W1229 07:10:18.098865   21144 out.go:285] X Exiting due to GUEST_PORT_IN_USE: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[preflight] Some fatal errors occurred:
		[ERROR Port-8441]: Port 8441 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	error: error execution phase preflight: preflight checks failed
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:10:18.098945   21144 out.go:285] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1229 07:10:18.098968   21144 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1229 07:10:18.100537   21144 out.go:203] 
	
	
	==> Docker <==
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.752007897Z" level=info msg="Loading containers: done."
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767040399Z" level=info msg="Docker daemon" commit=89c5e8f containerd-snapshotter=false storage-driver=overlay2 version=28.5.2
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.767118717Z" level=info msg="Initializing buildkit"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.788494375Z" level=info msg="Completed buildkit initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794133927Z" level=info msg="Daemon has completed initialization"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794208259Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794363772Z" level=info msg="API listen on /run/docker.sock"
	Dec 29 07:09:14 functional-695625 dockerd[11495]: time="2025-12-29T07:09:14.794367632Z" level=info msg="API listen on [::]:2376"
	Dec 29 07:09:14 functional-695625 systemd[1]: Started Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 29 07:09:14 functional-695625 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Dec 29 07:09:14 functional-695625 systemd[1]: cri-docker.service: Consumed 4.681s CPU time, 18.9M memory peak.
	Dec 29 07:09:15 functional-695625 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting cri-dockerd 0.4.1 (55d6e1a1d6f2ee58949e13a0c66afe7d779ac942)"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Loaded network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Setting cgroupDriver systemd"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 29 07:09:15 functional-695625 cri-dockerd[11858]: time="2025-12-29T07:09:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:09:15 functional-695625 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.005634] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.202886] crun[405]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.971059] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.268875] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.123569] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.099711] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.170782] kauditd_printk_skb: 221 callbacks suppressed
	[  +0.199839] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025660] kauditd_printk_skb: 318 callbacks suppressed
	[Dec29 06:53] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.204939] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.333829] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +4.976278] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.830497] kauditd_printk_skb: 396 callbacks suppressed
	[  +5.294312] kauditd_printk_skb: 231 callbacks suppressed
	[Dec29 06:56] kauditd_printk_skb: 36 callbacks suppressed
	[ +10.952068] kauditd_printk_skb: 66 callbacks suppressed
	[ +20.880271] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:57] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 06:58] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.672596] kauditd_printk_skb: 14 callbacks suppressed
	[Dec29 07:09] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> kernel <==
	 07:20:40 up 28 min,  0 users,  load average: 0.14, 0.05, 0.06
	Linux functional-695625 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Dec 29 06:58:43 functional-695625 kubelet[6517]: E1229 06:58:43.503356    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.519860    6517 scope.go:122] "RemoveContainer" containerID="b206d555ad194fa8eb29c391078f91a79baa156c83dfbefe92f8772dfc2c4cbc"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.520985    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521063    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: I1229 06:58:44.521079    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:44 functional-695625 kubelet[6517]: E1229 06:58:44.521196    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537487    6517 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-695625\" not found" node="functional-695625"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537563    6517 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-695625" containerName="kube-apiserver"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: I1229 06:58:45.537579    6517 scope.go:122] "RemoveContainer" containerID="07a17306156372940966dc7c7e00122a99f1c0f6e78ddc5e4c0cb67f3cff1817"
	Dec 29 06:58:45 functional-695625 kubelet[6517]: E1229 06:58:45.537686    6517 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-functional-695625_kube-system(d173c000af26dcef62569d3a5345fcae)\"" pod="kube-system/kube-apiserver-functional-695625" podUID="d173c000af26dcef62569d3a5345fcae"
	Dec 29 06:58:46 functional-695625 kubelet[6517]: E1229 06:58:46.747043    6517 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.121:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-695625?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 06:58:49 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 29 06:58:49 functional-695625 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 30.3M memory peak.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12046]: E1229 07:09:16.288680   12046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 29 07:09:16 functional-695625 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 29 07:09:16 functional-695625 kubelet[12175]: E1229 07:09:16.952765   12175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:09:16 functional-695625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:09:17 functional-695625 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-695625 -n functional-695625: exit status 2 (15.895116779s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-695625" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (152.39s)
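
The failures above bottom out in the kubeadm preflight error "[ERROR Port-8441]: Port 8441 is in use": the guest apiserver port is still bound (most likely by the previous apiserver instance), so kubeadm init aborts and every later kubectl call against 192.168.39.121:8441 times out. A minimal Go sketch, outside the test suite, of confirming that conflict before retrying; the address is taken from the logs above, and the actual remedy is still to find and stop whatever process holds the port.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// portInUse reports whether something is already listening on addr.
	// A successful dial matches the preflight failure above; a connection
	// error means the port is free (or the host is unreachable).
	func portInUse(addr string) bool {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		fmt.Println("port 8441 in use:", portInUse("192.168.39.121:8441"))
	}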

TestFunctional/parallel/DockerEnv/bash (16.03s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-695625 docker-env) && out/minikube-linux-amd64 status -p functional-695625"
functional_test.go:514: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-695625 docker-env) && out/minikube-linux-amd64 status -p functional-695625": exit status 2 (16.031327062s)

-- stdout --
	functional-695625
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	docker-env: in-use
	

-- /stdout --
functional_test.go:520: failed to do status after eval-ing docker-env. error: exit status 2
--- FAIL: TestFunctional/parallel/DockerEnv/bash (16.03s)
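
The docker-env eval itself appears to work (status reports "docker-env: in-use"); the subtest fails because the follow-up status command exits non-zero while kubelet and apiserver are Stopped. A simplified sketch of that check, assuming the same binary path and profile name as above rather than the harness's actual Run helper:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run minikube status and inspect the exit code, roughly what
		// functional_test.go does after eval-ing docker-env. With the
		// apiserver Stopped the command returns a non-zero status.
		cmd := exec.Command("out/minikube-linux-amd64", "status", "-p", "functional-695625")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}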

TestFunctional/parallel/MountCmd/any-port (35.46s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdany-port1045147339/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766992710427127984" to /tmp/TestFunctionalparallelMountCmdany-port1045147339/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766992710427127984" to /tmp/TestFunctionalparallelMountCmdany-port1045147339/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766992710427127984" to /tmp/TestFunctionalparallelMountCmdany-port1045147339/001/test-1766992710427127984
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (147.356107ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1229 07:18:30.574782   13486 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 29 07:18 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 29 07:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 29 07:18 test-1766992710427127984
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh cat /mount-9p/test-1766992710427127984
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-695625 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) Non-zero exit: kubectl --context functional-695625 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (34.058779073s)

** stderr ** 
	Error from server (Timeout): error when deleting "testdata/busybox-mount-test.yaml": Timeout: request did not complete within requested timeout - context deadline exceeded

** /stderr **
functional_test_mount_test.go:151: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-695625 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:81: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:82: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:82: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (158.504802ms)

-- stdout --
	192.168.39.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=1000,access=any,msize=262144,trans=tcp,noextend,port=43587)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 29 07:18 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 29 07:18 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 29 07:18 test-1766992710427127984
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:84: debugging command "out/minikube-linux-amd64 -p functional-695625 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdany-port1045147339/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:95: (dbg) [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdany-port1045147339/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port1045147339/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:43587
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port1045147339/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:95: (dbg) [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdany-port1045147339/001:/mount-9p --alsologtostderr -v=1] stderr:
I1229 07:18:30.483418   23982 out.go:360] Setting OutFile to fd 1 ...
I1229 07:18:30.483661   23982 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:18:30.483668   23982 out.go:374] Setting ErrFile to fd 2...
I1229 07:18:30.483672   23982 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:18:30.483927   23982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
I1229 07:18:30.484179   23982 mustload.go:66] Loading cluster: functional-695625
I1229 07:18:30.484478   23982 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:18:30.486421   23982 host.go:66] Checking if "functional-695625" exists ...
I1229 07:18:30.489280   23982 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:18:30.489818   23982 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
I1229 07:18:30.489865   23982 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:18:30.492692   23982 out.go:179] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port1045147339/001 into VM as /mount-9p ...
I1229 07:18:30.494380   23982 out.go:179]   - Mount type:   9p
I1229 07:18:30.495719   23982 out.go:179]   - User ID:      docker
I1229 07:18:30.497170   23982 out.go:179]   - Group ID:     docker
I1229 07:18:30.498627   23982 out.go:179]   - Version:      9p2000.L
I1229 07:18:30.499889   23982 out.go:179]   - Message Size: 262144
I1229 07:18:30.501098   23982 out.go:179]   - Options:      map[]
I1229 07:18:30.502893   23982 out.go:179]   - Bind Address: 192.168.39.1:43587
I1229 07:18:30.504398   23982 out.go:179] * Userspace file server: 
I1229 07:18:30.504546   23982 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1229 07:18:30.508656   23982 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:18:30.509125   23982 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 07:52:21 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
I1229 07:18:30.509155   23982 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:18:30.509337   23982 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
I1229 07:18:30.588813   23982 mount.go:180] unmount for /mount-9p ran successfully
I1229 07:18:30.588847   23982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1229 07:18:30.603022   23982 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=43587,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
I1229 07:18:30.635921   23982 main.go:127] stdlog: ufs.go:141 connected
I1229 07:18:30.636100   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tversion tag 65535 msize 262144 version '9P2000.L'
I1229 07:18:30.636160   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rversion tag 65535 msize 262144 version '9P2000'
I1229 07:18:30.636709   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1229 07:18:30.636844   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rattach tag 0 aqid (20fa085 68f90b1a 'd')
I1229 07:18:30.637302   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 0
I1229 07:18:30.637445   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa085 68f90b1a 'd') m d775 at 0 mt 1766992710 l 4096 t 0 d 0 ext )
I1229 07:18:30.638028   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 0
I1229 07:18:30.638139   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa085 68f90b1a 'd') m d775 at 0 mt 1766992710 l 4096 t 0 d 0 ext )
I1229 07:18:30.639318   23982 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/.mount-process: {Name:mk1a10bc6a131dc8bbfc0a9fb5bdf107b293f42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:18:30.639515   23982 mount.go:105] mount successful: ""
I1229 07:18:30.643929   23982 out.go:179] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port1045147339/001 to /mount-9p
I1229 07:18:30.645562   23982 out.go:203] 
I1229 07:18:30.646884   23982 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1229 07:18:31.219172   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 0
I1229 07:18:31.219333   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa085 68f90b1a 'd') m d775 at 0 mt 1766992710 l 4096 t 0 d 0 ext )
I1229 07:18:31.221330   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 1 
I1229 07:18:31.221382   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 
I1229 07:18:31.221831   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Topen tag 0 fid 1 mode 0
I1229 07:18:31.221923   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Ropen tag 0 qid (20fa085 68f90b1a 'd') iounit 0
I1229 07:18:31.222353   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 0
I1229 07:18:31.222464   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa085 68f90b1a 'd') m d775 at 0 mt 1766992710 l 4096 t 0 d 0 ext )
I1229 07:18:31.222996   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 1 offset 0 count 262120
I1229 07:18:31.223276   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 258
I1229 07:18:31.223551   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 1 offset 258 count 261862
I1229 07:18:31.223586   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 0
I1229 07:18:31.223878   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 1 offset 258 count 262120
I1229 07:18:31.223921   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 0
I1229 07:18:31.224161   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1229 07:18:31.224200   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa087 68f90b1a '') 
I1229 07:18:31.224502   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:18:31.224615   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa087 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.224959   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:18:31.225060   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa087 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.225343   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:18:31.225389   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:18:31.225695   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1229 07:18:31.225734   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa087 68f90b1a '') 
I1229 07:18:31.226072   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:18:31.226160   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa087 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.226476   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:18:31.226504   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:18:31.226784   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1229 07:18:31.226855   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa086 68f90b1a '') 
I1229 07:18:31.227106   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:18:31.227183   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa086 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.227438   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:18:31.227541   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa086 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.227877   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:18:31.227902   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:18:31.228186   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1229 07:18:31.228230   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa086 68f90b1a '') 
I1229 07:18:31.228471   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:18:31.228547   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa086 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.228835   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:18:31.228869   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:18:31.229136   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'test-1766992710427127984' 
I1229 07:18:31.229177   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa088 68f90b1a '') 
I1229 07:18:31.229390   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:18:31.229487   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('test-1766992710427127984' 'jenkins' 'balintp' '' q (20fa088 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.229689   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:18:31.229783   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('test-1766992710427127984' 'jenkins' 'balintp' '' q (20fa088 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.230054   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:18:31.230084   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:18:31.230426   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'test-1766992710427127984' 
I1229 07:18:31.230457   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa088 68f90b1a '') 
I1229 07:18:31.230822   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:18:31.230900   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('test-1766992710427127984' 'jenkins' 'balintp' '' q (20fa088 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.231243   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:18:31.231271   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:18:31.231632   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 1 offset 258 count 262120
I1229 07:18:31.231663   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 0
I1229 07:18:31.232009   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 1
I1229 07:18:31.232042   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:18:31.378946   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 1 0:'test-1766992710427127984' 
I1229 07:18:31.379030   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa088 68f90b1a '') 
I1229 07:18:31.379430   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 1
I1229 07:18:31.379548   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('test-1766992710427127984' 'jenkins' 'balintp' '' q (20fa088 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.379886   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 1 newfid 2 
I1229 07:18:31.379933   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 
I1229 07:18:31.380195   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Topen tag 0 fid 2 mode 0
I1229 07:18:31.380256   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Ropen tag 0 qid (20fa088 68f90b1a '') iounit 0
I1229 07:18:31.380514   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 1
I1229 07:18:31.380649   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('test-1766992710427127984' 'jenkins' 'balintp' '' q (20fa088 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:18:31.381070   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 2 offset 0 count 262120
I1229 07:18:31.381120   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 24
I1229 07:18:31.381480   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 2 offset 24 count 262120
I1229 07:18:31.381524   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 0
I1229 07:18:31.381918   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 2 offset 24 count 262120
I1229 07:18:31.381961   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 0
I1229 07:18:31.382283   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:18:31.382321   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:18:31.382572   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 1
I1229 07:18:31.382607   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:19:05.590333   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 0
I1229 07:19:05.590517   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa085 68f90b1a 'd') m d775 at 0 mt 1766992710 l 4096 t 0 d 0 ext )
I1229 07:19:05.592154   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 1 
I1229 07:19:05.592225   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 
I1229 07:19:05.592494   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Topen tag 0 fid 1 mode 0
I1229 07:19:05.592577   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Ropen tag 0 qid (20fa085 68f90b1a 'd') iounit 0
I1229 07:19:05.592778   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 0
I1229 07:19:05.592965   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa085 68f90b1a 'd') m d775 at 0 mt 1766992710 l 4096 t 0 d 0 ext )
I1229 07:19:05.593265   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 1 offset 0 count 262120
I1229 07:19:05.593430   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 258
I1229 07:19:05.593640   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 1 offset 258 count 261862
I1229 07:19:05.593685   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 0
I1229 07:19:05.593903   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 1 offset 258 count 262120
I1229 07:19:05.593938   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 0
I1229 07:19:05.594105   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1229 07:19:05.594153   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa087 68f90b1a '') 
I1229 07:19:05.594406   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:19:05.594511   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa087 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:19:05.594720   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:19:05.594874   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa087 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:19:05.595064   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:19:05.595101   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:19:05.595261   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1229 07:19:05.595314   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa087 68f90b1a '') 
I1229 07:19:05.595499   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:19:05.595678   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa087 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:19:05.595890   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:19:05.595919   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:19:05.596110   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1229 07:19:05.596172   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa086 68f90b1a '') 
I1229 07:19:05.596352   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:19:05.596455   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa086 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:19:05.596630   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:19:05.596811   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa086 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:19:05.597034   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:19:05.597064   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:19:05.597259   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1229 07:19:05.597312   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa086 68f90b1a '') 
I1229 07:19:05.597528   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:19:05.597620   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa086 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:19:05.597879   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:19:05.597929   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:19:05.598134   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'test-1766992710427127984' 
I1229 07:19:05.598195   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa088 68f90b1a '') 
I1229 07:19:05.598427   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:19:05.598538   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('test-1766992710427127984' 'jenkins' 'balintp' '' q (20fa088 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:19:05.598831   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:19:05.598921   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('test-1766992710427127984' 'jenkins' 'balintp' '' q (20fa088 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:19:05.599207   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:19:05.599235   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:19:05.599466   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 2 0:'test-1766992710427127984' 
I1229 07:19:05.599505   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rwalk tag 0 (20fa088 68f90b1a '') 
I1229 07:19:05.599688   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tstat tag 0 fid 2
I1229 07:19:05.599790   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rstat tag 0 st ('test-1766992710427127984' 'jenkins' 'balintp' '' q (20fa088 68f90b1a '') m 644 at 0 mt 1766992710 l 24 t 0 d 0 ext )
I1229 07:19:05.600007   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 2
I1229 07:19:05.600056   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:19:05.600267   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tread tag 0 fid 1 offset 258 count 262120
I1229 07:19:05.600335   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rread tag 0 count 0
I1229 07:19:05.600509   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 1
I1229 07:19:05.600557   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:19:05.603375   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1229 07:19:05.603426   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rerror tag 0 ename 'file not found' ecode 0
I1229 07:19:05.754000   23982 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.121:46820 Tclunk tag 0 fid 0
I1229 07:19:05.754061   23982 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.121:46820 Rclunk tag 0
I1229 07:19:05.754582   23982 main.go:127] stdlog: ufs.go:147 disconnected
I1229 07:19:05.791888   23982 out.go:179] * Unmounting /mount-9p ...
I1229 07:19:05.793713   23982 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1229 07:19:05.801805   23982 mount.go:180] unmount for /mount-9p ran successfully
I1229 07:19:05.801928   23982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/.mount-process: {Name:mk1a10bc6a131dc8bbfc0a9fb5bdf107b293f42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:19:05.804067   23982 out.go:203] 
W1229 07:19:05.805607   23982 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1229 07:19:05.806981   23982 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (35.46s)
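
The 9p mount itself works: the findmnt probe succeeds on the second attempt after the 400ms retry logged above, and the test data is visible in the guest. The failure comes from kubectl replace timing out against the unresponsive apiserver. A small retry helper in the spirit of that "will retry after 400ms" line, shown as an illustrative sketch rather than minikube's actual retry package:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retry runs fn up to attempts times, sleeping delay between tries and
	// doubling it each time.
	func retry(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retry(5, 400*time.Millisecond, func() error {
			// Same probe the test uses: is the 9p mount visible in the guest?
			return exec.Command("out/minikube-linux-amd64", "-p", "functional-695625",
				"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		})
		fmt.Println("mount visible:", err == nil)
	}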

TestFunctional/parallel/ServiceCmd/DeployApp (34.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-695625 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1456: (dbg) Non-zero exit: kubectl --context functional-695625 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server: exit status 1 (34.056895097s)

** stderr ** 
	error: failed to create deployment: Timeout: request did not complete within requested timeout - context deadline exceeded

** /stderr **
functional_test.go:1458: failed to create hello-node deployment with this command "kubectl --context functional-695625 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (34.06s)
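
The deployment is never created because the apiserver at 192.168.39.121:8441 accepts connections but does not complete requests, so kubectl gives up after about 34s with a server-side Timeout. A quick reachability probe, sketched under the assumptions that /readyz is served to anonymous clients (the usual default) and that skipping TLS verification is acceptable for a one-off diagnostic:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the guest apiserver seen in the logs before spending another
		// 34s on a kubectl timeout. Skips certificate verification purely
		// for illustration.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.121:8441/readyz")
		if err != nil {
			fmt.Println("apiserver probe failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /readyz:", resp.Status)
	}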

TestFunctional/parallel/ServiceCmd/List (15.73s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 service list
functional_test.go:1474: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 service list: exit status 103 (15.734713894s)

-- stdout --
	* The control-plane node functional-695625 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-695625"

-- /stdout --
functional_test.go:1476: failed to do service list. args "out/minikube-linux-amd64 -p functional-695625 service list" : exit status 103
functional_test.go:1479: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-695625 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-695625\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (15.73s)

TestFunctional/parallel/ServiceCmd/JSONOutput (15.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 service list -o json
functional_test.go:1504: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 service list -o json: exit status 103 (15.711424395s)

-- stdout --
	* The control-plane node functional-695625 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-695625"

-- /stdout --
functional_test.go:1506: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-695625 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (15.71s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 service --namespace=default --https --url hello-node
functional_test.go:1524: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 service --namespace=default --https --url hello-node: signal: killed (15.007508067s)
functional_test.go:1526: failed to get service url. args "out/minikube-linux-amd64 -p functional-695625 service --namespace=default --https --url hello-node" : signal: killed
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 service hello-node --url --format={{.IP}}
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 service hello-node --url --format={{.IP}}: signal: killed (15.006916506s)
functional_test.go:1557: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-695625 service hello-node --url --format={{.IP}}": signal: killed
functional_test.go:1563: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 service hello-node --url
functional_test.go:1574: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 service hello-node --url: signal: killed (15.00420131s)
functional_test.go:1576: failed to get service url. args: "out/minikube-linux-amd64 -p functional-695625 service hello-node --url": signal: killed
functional_test.go:1580: found endpoint for hello-node: 
functional_test.go:1588: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestMultiNode/serial/RestartMultiNode (65.32s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178114 --wait=true -v=5 --alsologtostderr --driver=kvm2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-178114 --wait=true -v=5 --alsologtostderr --driver=kvm2 : exit status 80 (1m3.45770645s)

-- stdout --
	* [multinode-178114] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "multinode-178114" primary control-plane node in "multinode-178114" cluster
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-178114-m02" worker node in "multinode-178114" cluster
	* Found network options:
	  - NO_PROXY=192.168.39.92
	  - env NO_PROXY=192.168.39.92
	
	

-- /stdout --
** stderr ** 
	I1229 07:50:53.098266   38571 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:50:53.098361   38571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:50:53.098368   38571 out.go:374] Setting ErrFile to fd 2...
	I1229 07:50:53.098374   38571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:50:53.098604   38571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:50:53.099117   38571 out.go:368] Setting JSON to false
	I1229 07:50:53.100056   38571 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5603,"bootTime":1766989050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:50:53.100114   38571 start.go:143] virtualization: kvm guest
	I1229 07:50:53.102525   38571 out.go:179] * [multinode-178114] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:50:53.104522   38571 notify.go:221] Checking for updates...
	I1229 07:50:53.104576   38571 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:50:53.106200   38571 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:50:53.107500   38571 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:50:53.108759   38571 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:50:53.109861   38571 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:50:53.111279   38571 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:50:53.113096   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:50:53.113667   38571 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:50:53.150223   38571 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 07:50:53.151479   38571 start.go:309] selected driver: kvm2
	I1229 07:50:53.151499   38571 start.go:928] validating driver "kvm2" against &{Name:multinode-178114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:fal
se kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:50:53.151664   38571 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:50:53.152644   38571 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:50:53.152681   38571 cni.go:84] Creating CNI manager for ""
	I1229 07:50:53.152750   38571 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I1229 07:50:53.152827   38571 start.go:353] cluster config:
	{Name:multinode-178114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:
false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:50:53.152959   38571 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:50:53.155573   38571 out.go:179] * Starting "multinode-178114" primary control-plane node in "multinode-178114" cluster
	I1229 07:50:53.156973   38571 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:50:53.157034   38571 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 07:50:53.157046   38571 cache.go:65] Caching tarball of preloaded images
	I1229 07:50:53.157192   38571 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 07:50:53.157208   38571 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:50:53.157372   38571 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/config.json ...
	I1229 07:50:53.157666   38571 start.go:360] acquireMachinesLock for multinode-178114: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 07:50:53.157723   38571 start.go:364] duration metric: took 32.734µs to acquireMachinesLock for "multinode-178114"
	I1229 07:50:53.157738   38571 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:50:53.157757   38571 fix.go:54] fixHost starting: 
	I1229 07:50:53.159786   38571 fix.go:112] recreateIfNeeded on multinode-178114: state=Stopped err=<nil>
	W1229 07:50:53.159832   38571 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:50:53.161485   38571 out.go:252] * Restarting existing kvm2 VM for "multinode-178114" ...
	I1229 07:50:53.161556   38571 main.go:144] libmachine: starting domain...
	I1229 07:50:53.161571   38571 main.go:144] libmachine: ensuring networks are active...
	I1229 07:50:53.162378   38571 main.go:144] libmachine: Ensuring network default is active
	I1229 07:50:53.162880   38571 main.go:144] libmachine: Ensuring network mk-multinode-178114 is active
	I1229 07:50:53.163394   38571 main.go:144] libmachine: getting domain XML...
	I1229 07:50:53.164599   38571 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>multinode-178114</name>
	  <uuid>ae972118-d57b-4c37-b972-fae087082f1e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/multinode-178114.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:52:d2:7c'/>
	      <source network='mk-multinode-178114'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:35:33:c2'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1229 07:50:54.473390   38571 main.go:144] libmachine: waiting for domain to start...
	I1229 07:50:54.474941   38571 main.go:144] libmachine: domain is now running
	I1229 07:50:54.474966   38571 main.go:144] libmachine: waiting for IP...
	I1229 07:50:54.475914   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:50:54.476623   38571 main.go:144] libmachine: domain multinode-178114 has current primary IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:50:54.476642   38571 main.go:144] libmachine: found domain IP: 192.168.39.92
	I1229 07:50:54.476651   38571 main.go:144] libmachine: reserving static IP address...
	I1229 07:50:54.477234   38571 main.go:144] libmachine: found host DHCP lease matching {name: "multinode-178114", mac: "52:54:00:52:d2:7c", ip: "192.168.39.92"} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:48:35 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:50:54.477272   38571 main.go:144] libmachine: skip adding static IP to network mk-multinode-178114 - found existing host DHCP lease matching {name: "multinode-178114", mac: "52:54:00:52:d2:7c", ip: "192.168.39.92"}
	I1229 07:50:54.477285   38571 main.go:144] libmachine: reserved static IP address 192.168.39.92 for domain multinode-178114
	I1229 07:50:54.477298   38571 main.go:144] libmachine: waiting for SSH...
	I1229 07:50:54.477306   38571 main.go:144] libmachine: Getting to WaitForSSH function...
	I1229 07:50:54.480161   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:50:54.480824   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:48:35 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:50:54.480862   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:50:54.481093   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:50:54.481393   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:50:54.481407   38571 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1229 07:50:57.543142   38571 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I1229 07:51:03.623087   38571 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I1229 07:51:06.741989   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:51:06.745461   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.746106   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:06.746136   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.746466   38571 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/config.json ...
	I1229 07:51:06.746701   38571 machine.go:94] provisionDockerMachine start ...
	I1229 07:51:06.749276   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.749840   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:06.749884   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.750065   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:06.750258   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:06.750268   38571 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:51:06.858747   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1229 07:51:06.858781   38571 buildroot.go:166] provisioning hostname "multinode-178114"
	I1229 07:51:06.861684   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.862261   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:06.862287   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.862499   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:06.862693   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:06.862704   38571 main.go:144] libmachine: About to run SSH command:
	sudo hostname multinode-178114 && echo "multinode-178114" | sudo tee /etc/hostname
	I1229 07:51:06.996666   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-178114
	
	I1229 07:51:06.999917   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.000487   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.000522   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.000743   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:07.000984   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:07.001007   38571 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-178114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-178114/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-178114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:51:07.125866   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:51:07.125903   38571 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 07:51:07.125921   38571 buildroot.go:174] setting up certificates
	I1229 07:51:07.125959   38571 provision.go:84] configureAuth start
	I1229 07:51:07.128359   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.128734   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.128753   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.130742   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.131153   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.131173   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.131287   38571 provision.go:143] copyHostCerts
	I1229 07:51:07.131310   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:51:07.131339   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 07:51:07.131348   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:51:07.131416   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 07:51:07.131501   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:51:07.131526   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 07:51:07.131535   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:51:07.131562   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 07:51:07.131655   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:51:07.131675   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 07:51:07.131678   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:51:07.131703   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 07:51:07.131755   38571 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.multinode-178114 san=[127.0.0.1 192.168.39.92 localhost minikube multinode-178114]
	I1229 07:51:07.153058   38571 provision.go:177] copyRemoteCerts
	I1229 07:51:07.153129   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:51:07.155873   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.156202   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.156226   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.156339   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:07.242577   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:51:07.242645   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1229 07:51:07.278317   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:51:07.278409   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:51:07.313100   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:51:07.313185   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:51:07.343069   38571 provision.go:87] duration metric: took 217.084373ms to configureAuth
	I1229 07:51:07.343103   38571 buildroot.go:189] setting minikube options for container-runtime
	I1229 07:51:07.343308   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:07.345919   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.346237   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.346264   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.346399   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:07.346580   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:07.346590   38571 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:51:07.455352   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 07:51:07.455382   38571 buildroot.go:70] root file system type: tmpfs
	I1229 07:51:07.455540   38571 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:51:07.459073   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.459582   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.459610   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.459913   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:07.460123   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:07.460168   38571 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:51:07.585693   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:51:07.588880   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.589346   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.589372   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.589559   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:07.589742   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:07.589757   38571 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:51:08.671158   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1229 07:51:08.671184   38571 machine.go:97] duration metric: took 1.924469757s to provisionDockerMachine
	I1229 07:51:08.671206   38571 start.go:293] postStartSetup for "multinode-178114" (driver="kvm2")
	I1229 07:51:08.671217   38571 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:51:08.671295   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:51:08.674391   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.674948   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:08.674985   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.675233   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:08.761896   38571 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:51:08.766628   38571 info.go:137] Remote host: Buildroot 2025.02
	I1229 07:51:08.766662   38571 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 07:51:08.766749   38571 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 07:51:08.766881   38571 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 07:51:08.766896   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /etc/ssl/certs/134862.pem
	I1229 07:51:08.767028   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:51:08.780296   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:51:08.811290   38571 start.go:296] duration metric: took 140.065036ms for postStartSetup
	I1229 07:51:08.811351   38571 fix.go:56] duration metric: took 15.653592801s for fixHost
	I1229 07:51:08.814267   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.814627   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:08.814651   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.814823   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:08.815124   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:08.815147   38571 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 07:51:08.924579   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766994668.888468870
	
	I1229 07:51:08.924604   38571 fix.go:216] guest clock: 1766994668.888468870
	I1229 07:51:08.924614   38571 fix.go:229] Guest: 2025-12-29 07:51:08.88846887 +0000 UTC Remote: 2025-12-29 07:51:08.811358033 +0000 UTC m=+15.762530033 (delta=77.110837ms)
	I1229 07:51:08.924636   38571 fix.go:200] guest clock delta is within tolerance: 77.110837ms
	I1229 07:51:08.924642   38571 start.go:83] releasing machines lock for "multinode-178114", held for 15.766910696s
	I1229 07:51:08.928536   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.929437   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:08.929475   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.930223   38571 ssh_runner.go:195] Run: cat /version.json
	I1229 07:51:08.930433   38571 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:51:08.933593   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.933965   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.934165   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:08.934198   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.934379   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:08.934392   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:08.934442   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.934590   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:09.013986   38571 ssh_runner.go:195] Run: systemctl --version
	I1229 07:51:09.038743   38571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:51:09.045615   38571 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:51:09.045689   38571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:51:09.065815   38571 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1229 07:51:09.065840   38571 start.go:496] detecting cgroup driver to use...
	I1229 07:51:09.065865   38571 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:51:09.065958   38571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:51:09.089198   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:51:09.101721   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:51:09.114294   38571 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:51:09.114369   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:51:09.127454   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:51:09.140148   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:51:09.152891   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:51:09.166865   38571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:51:09.179867   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:51:09.192602   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:51:09.206123   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:51:09.219317   38571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:51:09.230222   38571 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1229 07:51:09.230307   38571 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1229 07:51:09.243761   38571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:51:09.255529   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:09.398450   38571 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 07:51:09.437402   38571 start.go:496] detecting cgroup driver to use...
	I1229 07:51:09.437451   38571 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:51:09.437500   38571 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:51:09.456156   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:51:09.474931   38571 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:51:09.501676   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:51:09.520621   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:51:09.537483   38571 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1229 07:51:09.574468   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:51:09.590669   38571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:51:09.614033   38571 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:51:09.618443   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:51:09.630500   38571 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:51:09.652449   38571 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:51:09.801472   38571 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:51:09.978337   38571 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:51:09.978456   38571 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 07:51:10.001127   38571 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:51:10.017264   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:10.162882   38571 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 07:51:10.726562   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:51:10.742501   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:51:10.757500   38571 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 07:51:10.774644   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:51:10.790824   38571 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:51:10.934117   38571 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:51:11.078908   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:11.224636   38571 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:51:11.265331   38571 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:51:11.282148   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:11.432086   38571 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:51:11.553354   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:51:11.573972   38571 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:51:11.574048   38571 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:51:11.581008   38571 start.go:574] Will wait 60s for crictl version
	I1229 07:51:11.581096   38571 ssh_runner.go:195] Run: which crictl
	I1229 07:51:11.585569   38571 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 07:51:11.621890   38571 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 07:51:11.621985   38571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:51:11.651921   38571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:51:11.679435   38571 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 07:51:11.682297   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:11.682717   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:11.682744   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:11.682947   38571 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 07:51:11.688071   38571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:51:11.703105   38571 kubeadm.go:884] updating cluster {Name:multinode-178114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:51:11.703275   38571 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:51:11.703323   38571 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:51:11.723930   38571 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1229 07:51:11.723957   38571 docker.go:624] Images already preloaded, skipping extraction
	I1229 07:51:11.724028   38571 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:51:11.743395   38571 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1229 07:51:11.743420   38571 cache_images.go:86] Images are preloaded, skipping loading
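
Note: the two identical `docker images --format {{.Repository}}:{{.Tag}}` listings above are what backs the "Images are preloaded" decision. A rough Go sketch of that check (an assumed helper, not minikube's implementation), using a few of the expected images from the stdout block above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same listing command the log runs over SSH inside the VM.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := make(map[string]bool)
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	// A subset of the images the preload is expected to contain.
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0",
		"registry.k8s.io/etcd:3.6.6-0",
		"registry.k8s.io/coredns/coredns:v1.13.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing preloaded image:", img)
		}
	}
}
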
	I1229 07:51:11.743430   38571 kubeadm.go:935] updating node { 192.168.39.92 8443 v1.35.0 docker true true} ...
	I1229 07:51:11.743533   38571 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-178114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:51:11.743587   38571 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 07:51:11.796519   38571 cni.go:84] Creating CNI manager for ""
	I1229 07:51:11.796545   38571 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I1229 07:51:11.796557   38571 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:51:11.796588   38571 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-178114 NodeName:multinode-178114 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:51:11.796758   38571 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-178114"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:51:11.796859   38571 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:51:11.809410   38571 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:51:11.809497   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:51:11.821673   38571 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1229 07:51:11.843346   38571 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:51:11.864417   38571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
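
Note: the kubeadm.yaml.new written above is the multi-document YAML shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch, assuming gopkg.in/yaml.v3 is available, that decodes each document and prints the kubelet settings the restart path depends on:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path written by the scp step above.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			fmt.Println("decode:", err)
			return
		}
		// Only the KubeletConfiguration document carries these two fields.
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
			fmt.Println("containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
		}
	}
}
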
	I1229 07:51:11.886464   38571 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1229 07:51:11.890871   38571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:51:11.906877   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:12.052311   38571 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:51:12.094371   38571 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114 for IP: 192.168.39.92
	I1229 07:51:12.094403   38571 certs.go:195] generating shared ca certs ...
	I1229 07:51:12.094425   38571 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:12.094630   38571 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 07:51:12.094734   38571 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 07:51:12.094767   38571 certs.go:257] generating profile certs ...
	I1229 07:51:12.094930   38571 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/client.key
	I1229 07:51:12.095005   38571 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/apiserver.key.0309c206
	I1229 07:51:12.095075   38571 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/proxy-client.key
	I1229 07:51:12.095092   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:51:12.095114   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:51:12.095131   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:51:12.095149   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:51:12.095166   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:51:12.095186   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:51:12.095205   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:51:12.095230   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:51:12.095305   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 07:51:12.095350   38571 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 07:51:12.095364   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:51:12.095410   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:51:12.095449   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:51:12.095481   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 07:51:12.095536   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:51:12.095592   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:12.095615   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem -> /usr/share/ca-certificates/13486.pem
	I1229 07:51:12.095636   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /usr/share/ca-certificates/134862.pem
	I1229 07:51:12.096355   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:51:12.132034   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:51:12.177430   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:51:12.216360   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:51:12.258275   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:51:12.291093   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:51:12.321598   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:51:12.352706   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:51:12.384224   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:51:12.414782   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 07:51:12.444889   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 07:51:12.475101   38571 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:51:12.496952   38571 ssh_runner.go:195] Run: openssl version
	I1229 07:51:12.504099   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:12.515723   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:51:12.527980   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:12.533592   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:12.533668   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:12.541533   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:51:12.553380   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:51:12.565726   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 07:51:12.577658   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 07:51:12.589832   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 07:51:12.595615   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 07:51:12.595681   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 07:51:12.603515   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:51:12.615589   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13486.pem /etc/ssl/certs/51391683.0
	I1229 07:51:12.627899   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 07:51:12.639904   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 07:51:12.652275   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 07:51:12.657954   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 07:51:12.658062   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 07:51:12.665699   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:51:12.677962   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/134862.pem /etc/ssl/certs/3ec20f2e.0
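
Note: the `openssl x509 -hash -noout` / `ln -fs` pairs above install each CA under /etc/ssl/certs by its OpenSSL subject hash (for example, minikubeCA.pem hashes to b5213941, hence the b5213941.0 link). A Go sketch of that convention (assumed helper; the log links through /etc/ssl/certs/<name>.pem first, this simplification links the source PEM directly):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // one of the PEMs installed above

	// `openssl x509 -hash -noout` prints the subject hash OpenSSL uses for lookups.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println("symlink:", err)
		return
	}
	fmt.Println(link, "->", pem)
}
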
	I1229 07:51:12.689811   38571 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:51:12.695401   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:51:12.703343   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:51:12.711012   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:51:12.718870   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:51:12.726935   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:51:12.734500   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
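
Note: each `-checkend 86400` call above asks OpenSSL whether the certificate will still be valid 24 hours from now. The same check with the Go standard library (a sketch, using one of the paths from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // one of the certs checked above

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block in", path)
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// `-checkend 86400` succeeds only if the cert is still valid 86400s (24h) from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h, notAfter:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h, notAfter:", cert.NotAfter)
	}
}
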
	I1229 07:51:12.742262   38571 kubeadm.go:401] StartCluster: {Name:multinode-178114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:51:12.742418   38571 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:51:12.763028   38571 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:51:12.775756   38571 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:51:12.775778   38571 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:51:12.775842   38571 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:51:12.788013   38571 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:51:12.788448   38571 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-178114" does not appear in /home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:51:12.788557   38571 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9552/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-178114" cluster setting kubeconfig missing "multinode-178114" context setting]
	I1229 07:51:12.788782   38571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9552/kubeconfig: {Name:mk3da1041c6a3b33ce0ec44cd869a994fb20af51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:12.789261   38571 kapi.go:59] client config for multinode-178114: &rest.Config{Host:"https://192.168.39.92:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:51:12.789676   38571 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 07:51:12.789692   38571 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 07:51:12.789696   38571 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 07:51:12.789701   38571 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 07:51:12.789704   38571 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 07:51:12.789711   38571 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 07:51:12.789828   38571 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1229 07:51:12.790025   38571 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:51:12.801675   38571 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.92
	I1229 07:51:12.801713   38571 kubeadm.go:1161] stopping kube-system containers ...
	I1229 07:51:12.801774   38571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:51:12.824583   38571 docker.go:487] Stopping containers: [fa2d4896312f 10cc8e4caf70 a4a4bb711fae 3892c6c0af78 3851ba81f40b d644dd7bd97e b96c765a8200 88eb1127f7bc 73a8ff257e15 4488bd802d84 639f398a163a d795c528b8a8 2dc0bafab6b5 763a73e3ba26 1092c045c2b8 537ac7bc94fd a093949edbf9 6a5c93b6883a e1490ac6bef8 11be3013b57b 445e364e9cff d61bdd98dbb5 4967abaafd4b e064edf96800 69799af14d2b cad4676f8501 5a2d698e85df ffef27606300 53e347908f7e e8220b5c7dc1 f207bbed0521]
	I1229 07:51:12.824679   38571 ssh_runner.go:195] Run: docker stop fa2d4896312f 10cc8e4caf70 a4a4bb711fae 3892c6c0af78 3851ba81f40b d644dd7bd97e b96c765a8200 88eb1127f7bc 73a8ff257e15 4488bd802d84 639f398a163a d795c528b8a8 2dc0bafab6b5 763a73e3ba26 1092c045c2b8 537ac7bc94fd a093949edbf9 6a5c93b6883a e1490ac6bef8 11be3013b57b 445e364e9cff d61bdd98dbb5 4967abaafd4b e064edf96800 69799af14d2b cad4676f8501 5a2d698e85df ffef27606300 53e347908f7e e8220b5c7dc1 f207bbed0521
	I1229 07:51:12.848053   38571 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 07:51:12.866345   38571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:51:12.878659   38571 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:51:12.878680   38571 kubeadm.go:158] found existing configuration files:
	
	I1229 07:51:12.878746   38571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:51:12.889702   38571 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:51:12.889781   38571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:51:12.901605   38571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:51:12.912779   38571 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:51:12.912879   38571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:51:12.925699   38571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:51:12.936701   38571 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:51:12.936764   38571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:51:12.949135   38571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:51:12.960275   38571 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:51:12.960339   38571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:51:12.972715   38571 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:51:12.985305   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:13.111528   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:13.591093   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:13.836241   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:13.923699   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
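
Note: the five sudo invocations above replay kubeadm's init phases one at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than running a full `kubeadm init`. A Go sketch of that sequence as a loop (assumed wrapper around the same binary and config paths shown in the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.35.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		fmt.Printf("%s %v:\n%s", kubeadm, p, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}
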
	I1229 07:51:14.041891   38571 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:51:14.041983   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:14.542156   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:15.042868   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:15.542333   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:16.043048   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:16.077971   38571 api_server.go:72] duration metric: took 2.036089879s to wait for apiserver process to appear ...
	I1229 07:51:16.078003   38571 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:51:16.078027   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:16.078655   38571 api_server.go:315] stopped: https://192.168.39.92:8443/healthz: Get "https://192.168.39.92:8443/healthz": dial tcp 192.168.39.92:8443: connect: connection refused
	I1229 07:51:16.578305   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:18.443388   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1229 07:51:18.443431   38571 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1229 07:51:18.443450   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:18.484701   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1229 07:51:18.484737   38571 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1229 07:51:18.579095   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:18.595577   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:51:18.595609   38571 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:51:19.078258   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:19.087363   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:51:19.087390   38571 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:51:19.579109   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:19.588877   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:51:19.588907   38571 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:51:20.078209   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:20.091375   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 200:
	ok
	I1229 07:51:20.102932   38571 api_server.go:141] control plane version: v1.35.0
	I1229 07:51:20.102963   38571 api_server.go:131] duration metric: took 4.024953885s to wait for apiserver health ...
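
Note: the healthz wait above goes 403 (anonymous access is rejected while the RBAC bootstrap roles that permit it are still being created), then 500 while post-start hooks finish, then 200 "ok". A self-contained Go sketch of such a polling loop (assumption: certificate verification is skipped to keep it short; real code would trust ca.crt):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.92:8443/healthz" // endpoint polled above

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert chains to minikubeCA; skipping verification keeps
		// the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}
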
	I1229 07:51:20.102971   38571 cni.go:84] Creating CNI manager for ""
	I1229 07:51:20.102975   38571 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I1229 07:51:20.105045   38571 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:51:20.106509   38571 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:51:20.129102   38571 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:51:20.129126   38571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:51:20.173712   38571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:51:20.944213   38571 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:51:20.950659   38571 system_pods.go:59] 12 kube-system pods found
	I1229 07:51:20.950722   38571 system_pods.go:61] "coredns-7d764666f9-gqqbx" [dd603e72-7da4-4f75-8c97-de4593e77af5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:51:20.950737   38571 system_pods.go:61] "etcd-multinode-178114" [c6fc7d5a-3d83-4382-8c01-91e4f8d8ad33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:51:20.950746   38571 system_pods.go:61] "kindnet-4t5sq" [6712d04a-a02b-425c-8659-11dec3a7d281] Running
	I1229 07:51:20.950753   38571 system_pods.go:61] "kindnet-5tphv" [a9324ef8-9243-4845-84cf-f27ee8a74693] Running
	I1229 07:51:20.950762   38571 system_pods.go:61] "kindnet-gwvxq" [07f4fe8b-bce0-4fd5-93cf-2de2803c6c14] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:51:20.950771   38571 system_pods.go:61] "kube-apiserver-multinode-178114" [e17c2dfb-5fb7-42ef-a235-edc0a3bb2c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:51:20.950784   38571 system_pods.go:61] "kube-controller-manager-multinode-178114" [a0c429d8-6d0a-4057-b3f4-ceef822a9ffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:51:20.950815   38571 system_pods.go:61] "kube-proxy-2b4vx" [ade09c16-533b-4a8b-a136-3692d76cbfc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:51:20.950827   38571 system_pods.go:61] "kube-proxy-cv887" [881437d9-95e9-49e4-84c0-821225e76269] Running
	I1229 07:51:20.950833   38571 system_pods.go:61] "kube-proxy-z489z" [0d8a34cb-9fbc-4cae-b846-2dd702009fc0] Running
	I1229 07:51:20.950843   38571 system_pods.go:61] "kube-scheduler-multinode-178114" [704630b8-21f0-432d-89eb-3b3308e951c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:51:20.950854   38571 system_pods.go:61] "storage-provisioner" [8bdbcf14-1a5b-4150-9adc-728e41f9e652] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:51:20.950865   38571 system_pods.go:74] duration metric: took 6.622235ms to wait for pod list to return data ...
	I1229 07:51:20.950878   38571 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:51:20.953813   38571 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1229 07:51:20.953850   38571 node_conditions.go:123] node cpu capacity is 2
	I1229 07:51:20.953868   38571 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1229 07:51:20.953875   38571 node_conditions.go:123] node cpu capacity is 2
	I1229 07:51:20.953884   38571 node_conditions.go:105] duration metric: took 2.996676ms to run NodePressure ...
	I1229 07:51:20.953965   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:21.363980   38571 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1229 07:51:21.370528   38571 kubeadm.go:744] kubelet initialised
	I1229 07:51:21.370555   38571 kubeadm.go:745] duration metric: took 6.544349ms waiting for restarted kubelet to initialise ...
	I1229 07:51:21.370576   38571 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:51:21.397453   38571 ops.go:34] apiserver oom_adj: -16
	I1229 07:51:21.397480   38571 kubeadm.go:602] duration metric: took 8.621695264s to restartPrimaryControlPlane
	I1229 07:51:21.397492   38571 kubeadm.go:403] duration metric: took 8.65523758s to StartCluster
	I1229 07:51:21.397514   38571 settings.go:142] acquiring lock: {Name:mk265ccb37cd5f502c9a7085e916ed505357de62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:21.397588   38571 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:51:21.398128   38571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9552/kubeconfig: {Name:mk3da1041c6a3b33ce0ec44cd869a994fb20af51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:21.398334   38571 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1229 07:51:21.398443   38571 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:51:21.398681   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:21.400069   38571 out.go:179] * Verifying Kubernetes components...
	I1229 07:51:21.400069   38571 out.go:179] * Enabled addons: 
	I1229 07:51:21.401193   38571 addons.go:530] duration metric: took 2.755662ms for enable addons: enabled=[]
	I1229 07:51:21.401252   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:21.673313   38571 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:51:21.693823   38571 node_ready.go:35] waiting up to 6m0s for node "multinode-178114" to be "Ready" ...
	W1229 07:51:23.698318   38571 node_ready.go:57] node "multinode-178114" has "Ready":"False" status (will retry)
	W1229 07:51:26.197364   38571 node_ready.go:57] node "multinode-178114" has "Ready":"False" status (will retry)
	W1229 07:51:28.198652   38571 node_ready.go:57] node "multinode-178114" has "Ready":"False" status (will retry)
	W1229 07:51:30.199125   38571 node_ready.go:57] node "multinode-178114" has "Ready":"False" status (will retry)
	I1229 07:51:31.697220   38571 node_ready.go:49] node "multinode-178114" is "Ready"
	I1229 07:51:31.697265   38571 node_ready.go:38] duration metric: took 10.003370599s for node "multinode-178114" to be "Ready" ...
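
Note: the node_ready wait above polls until the node's Ready condition turns True (about 10 s here; minikube's own helper is the node_ready.go code referenced in the log). A sketch of the same check via client-go, assuming the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/22353-9552/kubeconfig" // path from the log
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		fmt.Println(err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-178114", metav1.GetOptions{})
	if err != nil {
		fmt.Println(err)
		return
	}
	// A node counts as Ready when the NodeReady condition reports status True.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
}
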
	I1229 07:51:31.697287   38571 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:51:31.697351   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:31.718089   38571 api_server.go:72] duration metric: took 10.319712382s to wait for apiserver process to appear ...
	I1229 07:51:31.718118   38571 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:51:31.718140   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:31.724039   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 200:
	ok
	I1229 07:51:31.725170   38571 api_server.go:141] control plane version: v1.35.0
	I1229 07:51:31.725201   38571 api_server.go:131] duration metric: took 7.075007ms to wait for apiserver health ...
	I1229 07:51:31.725214   38571 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:51:31.729667   38571 system_pods.go:59] 12 kube-system pods found
	I1229 07:51:31.729704   38571 system_pods.go:61] "coredns-7d764666f9-gqqbx" [dd603e72-7da4-4f75-8c97-de4593e77af5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:51:31.729713   38571 system_pods.go:61] "etcd-multinode-178114" [c6fc7d5a-3d83-4382-8c01-91e4f8d8ad33] Running
	I1229 07:51:31.729721   38571 system_pods.go:61] "kindnet-4t5sq" [6712d04a-a02b-425c-8659-11dec3a7d281] Running
	I1229 07:51:31.729763   38571 system_pods.go:61] "kindnet-5tphv" [a9324ef8-9243-4845-84cf-f27ee8a74693] Running
	I1229 07:51:31.729771   38571 system_pods.go:61] "kindnet-gwvxq" [07f4fe8b-bce0-4fd5-93cf-2de2803c6c14] Running
	I1229 07:51:31.729784   38571 system_pods.go:61] "kube-apiserver-multinode-178114" [e17c2dfb-5fb7-42ef-a235-edc0a3bb2c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:51:31.729813   38571 system_pods.go:61] "kube-controller-manager-multinode-178114" [a0c429d8-6d0a-4057-b3f4-ceef822a9ffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:51:31.729825   38571 system_pods.go:61] "kube-proxy-2b4vx" [ade09c16-533b-4a8b-a136-3692d76cbfc9] Running
	I1229 07:51:31.729832   38571 system_pods.go:61] "kube-proxy-cv887" [881437d9-95e9-49e4-84c0-821225e76269] Running
	I1229 07:51:31.729838   38571 system_pods.go:61] "kube-proxy-z489z" [0d8a34cb-9fbc-4cae-b846-2dd702009fc0] Running
	I1229 07:51:31.729844   38571 system_pods.go:61] "kube-scheduler-multinode-178114" [704630b8-21f0-432d-89eb-3b3308e951c7] Running
	I1229 07:51:31.729853   38571 system_pods.go:61] "storage-provisioner" [8bdbcf14-1a5b-4150-9adc-728e41f9e652] Running
	I1229 07:51:31.729862   38571 system_pods.go:74] duration metric: took 4.639441ms to wait for pod list to return data ...
	I1229 07:51:31.729876   38571 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:51:31.733788   38571 default_sa.go:45] found service account: "default"
	I1229 07:51:31.733832   38571 default_sa.go:55] duration metric: took 3.949029ms for default service account to be created ...
	I1229 07:51:31.733846   38571 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:51:31.737599   38571 system_pods.go:86] 12 kube-system pods found
	I1229 07:51:31.737636   38571 system_pods.go:89] "coredns-7d764666f9-gqqbx" [dd603e72-7da4-4f75-8c97-de4593e77af5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:51:31.737645   38571 system_pods.go:89] "etcd-multinode-178114" [c6fc7d5a-3d83-4382-8c01-91e4f8d8ad33] Running
	I1229 07:51:31.737653   38571 system_pods.go:89] "kindnet-4t5sq" [6712d04a-a02b-425c-8659-11dec3a7d281] Running
	I1229 07:51:31.737657   38571 system_pods.go:89] "kindnet-5tphv" [a9324ef8-9243-4845-84cf-f27ee8a74693] Running
	I1229 07:51:31.737662   38571 system_pods.go:89] "kindnet-gwvxq" [07f4fe8b-bce0-4fd5-93cf-2de2803c6c14] Running
	I1229 07:51:31.737671   38571 system_pods.go:89] "kube-apiserver-multinode-178114" [e17c2dfb-5fb7-42ef-a235-edc0a3bb2c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:51:31.737681   38571 system_pods.go:89] "kube-controller-manager-multinode-178114" [a0c429d8-6d0a-4057-b3f4-ceef822a9ffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:51:31.737691   38571 system_pods.go:89] "kube-proxy-2b4vx" [ade09c16-533b-4a8b-a136-3692d76cbfc9] Running
	I1229 07:51:31.737697   38571 system_pods.go:89] "kube-proxy-cv887" [881437d9-95e9-49e4-84c0-821225e76269] Running
	I1229 07:51:31.737702   38571 system_pods.go:89] "kube-proxy-z489z" [0d8a34cb-9fbc-4cae-b846-2dd702009fc0] Running
	I1229 07:51:31.737711   38571 system_pods.go:89] "kube-scheduler-multinode-178114" [704630b8-21f0-432d-89eb-3b3308e951c7] Running
	I1229 07:51:31.737716   38571 system_pods.go:89] "storage-provisioner" [8bdbcf14-1a5b-4150-9adc-728e41f9e652] Running
	I1229 07:51:31.737727   38571 system_pods.go:126] duration metric: took 3.873327ms to wait for k8s-apps to be running ...
	I1229 07:51:31.737746   38571 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:51:31.737820   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:51:31.755088   38571 system_svc.go:56] duration metric: took 17.333941ms WaitForService to wait for kubelet
	I1229 07:51:31.755119   38571 kubeadm.go:587] duration metric: took 10.356757748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:51:31.755140   38571 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:51:31.758227   38571 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1229 07:51:31.758250   38571 node_conditions.go:123] node cpu capacity is 2
	I1229 07:51:31.758264   38571 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1229 07:51:31.758269   38571 node_conditions.go:123] node cpu capacity is 2
	I1229 07:51:31.758275   38571 node_conditions.go:105] duration metric: took 3.129916ms to run NodePressure ...
	I1229 07:51:31.758288   38571 start.go:242] waiting for startup goroutines ...
	I1229 07:51:31.758299   38571 start.go:247] waiting for cluster config update ...
	I1229 07:51:31.758311   38571 start.go:256] writing updated cluster config ...
	I1229 07:51:31.760550   38571 out.go:203] 
	I1229 07:51:31.762212   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:31.762323   38571 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/config.json ...
	I1229 07:51:31.764028   38571 out.go:179] * Starting "multinode-178114-m02" worker node in "multinode-178114" cluster
	I1229 07:51:31.765543   38571 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:51:31.765568   38571 cache.go:65] Caching tarball of preloaded images
	I1229 07:51:31.765696   38571 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 07:51:31.765714   38571 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:51:31.765854   38571 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/config.json ...
	I1229 07:51:31.766093   38571 start.go:360] acquireMachinesLock for multinode-178114-m02: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 07:51:31.766158   38571 start.go:364] duration metric: took 41.249µs to acquireMachinesLock for "multinode-178114-m02"
	I1229 07:51:31.766180   38571 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:51:31.766188   38571 fix.go:54] fixHost starting: m02
	I1229 07:51:31.767951   38571 fix.go:112] recreateIfNeeded on multinode-178114-m02: state=Stopped err=<nil>
	W1229 07:51:31.767987   38571 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:51:31.769865   38571 out.go:252] * Restarting existing kvm2 VM for "multinode-178114-m02" ...
	I1229 07:51:31.769919   38571 main.go:144] libmachine: starting domain...
	I1229 07:51:31.769930   38571 main.go:144] libmachine: ensuring networks are active...
	I1229 07:51:31.771017   38571 main.go:144] libmachine: Ensuring network default is active
	I1229 07:51:31.771435   38571 main.go:144] libmachine: Ensuring network mk-multinode-178114 is active
	I1229 07:51:31.771905   38571 main.go:144] libmachine: getting domain XML...
	I1229 07:51:31.773138   38571 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>multinode-178114-m02</name>
	  <uuid>235b672c-4d78-42ab-b738-854b9695842e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/multinode-178114-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a6:c7:ce'/>
	      <source network='mk-multinode-178114'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7c:63:d5'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1229 07:51:33.090100   38571 main.go:144] libmachine: waiting for domain to start...
	I1229 07:51:33.091989   38571 main.go:144] libmachine: domain is now running
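	For the "starting domain" step above: minikube's kvm2 driver talks to libvirt directly through libmachine, but the same effect can be sketched with the virsh CLI. The XML path and domain name below are illustrative placeholders.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// defineAndStart registers a domain from an XML file and boots it,
	// roughly what the log above does, but via virsh rather than the
	// libvirt API the kvm2 driver uses.
	func defineAndStart(xmlPath, name string) error {
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v: %s", err, out)
		}
		if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical path; the domain name matches the log above.
		if err := defineAndStart("/tmp/multinode-178114-m02.xml", "multinode-178114-m02"); err != nil {
			fmt.Println(err)
		}
	}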
	I1229 07:51:33.092014   38571 main.go:144] libmachine: waiting for IP...
	I1229 07:51:33.092777   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:33.093388   38571 main.go:144] libmachine: domain multinode-178114-m02 has current primary IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:33.093405   38571 main.go:144] libmachine: found domain IP: 192.168.39.61
	I1229 07:51:33.093413   38571 main.go:144] libmachine: reserving static IP address...
	I1229 07:51:33.093903   38571 main.go:144] libmachine: found host DHCP lease matching {name: "multinode-178114-m02", mac: "52:54:00:a6:c7:ce", ip: "192.168.39.61"} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:49:14 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:33.093935   38571 main.go:144] libmachine: skip adding static IP to network mk-multinode-178114 - found existing host DHCP lease matching {name: "multinode-178114-m02", mac: "52:54:00:a6:c7:ce", ip: "192.168.39.61"}
	I1229 07:51:33.093949   38571 main.go:144] libmachine: reserved static IP address 192.168.39.61 for domain multinode-178114-m02
	I1229 07:51:33.093959   38571 main.go:144] libmachine: waiting for SSH...
	I1229 07:51:33.093966   38571 main.go:144] libmachine: Getting to WaitForSSH function...
	I1229 07:51:33.096568   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:33.097104   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:49:14 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:33.097139   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:33.097392   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:33.097579   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:33.097588   38571 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1229 07:51:36.199067   38571 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.39.61:22: connect: no route to host
	I1229 07:51:42.279096   38571 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.39.61:22: connect: no route to host
	I1229 07:51:45.381551   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 
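	The two "no route to host" errors above are expected while the guest is still booting: the driver just keeps retrying TCP port 22 until the dial succeeds. A minimal sketch of that retry loop, with only an address and a deadline as inputs.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH retries a plain TCP dial to addr (host:port) until it
	// succeeds or the deadline passes; "no route to host" and timeouts
	// are handled the same way: sleep and try again.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("port %s never became reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForSSH("192.168.39.61:22", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}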
	I1229 07:51:45.385232   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.385704   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.385730   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.386098   38571 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/config.json ...
	I1229 07:51:45.386314   38571 machine.go:94] provisionDockerMachine start ...
	I1229 07:51:45.388606   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.389148   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.389172   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.389342   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:45.389521   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:45.389531   38571 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:51:45.490500   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1229 07:51:45.490531   38571 buildroot.go:166] provisioning hostname "multinode-178114-m02"
	I1229 07:51:45.493053   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.493445   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.493466   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.493619   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:45.493862   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:45.493877   38571 main.go:144] libmachine: About to run SSH command:
	sudo hostname multinode-178114-m02 && echo "multinode-178114-m02" | sudo tee /etc/hostname
	I1229 07:51:45.614273   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-178114-m02
	
	I1229 07:51:45.617481   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.618016   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.618062   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.618279   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:45.618509   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:45.618527   38571 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-178114-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-178114-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-178114-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:51:45.734666   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 
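	The shell snippet above rewrites (or appends) the 127.0.1.1 line so the guest can resolve its own hostname. The same logic expressed in Go, as a sketch that works on file contents handed to it; reading/writing /etc/hosts and sudo handling are left out.

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname returns hosts with a "127.0.1.1 <name>" entry:
	// an existing 127.0.1.1 line is rewritten, otherwise one is appended.
	// Mirrors the grep/sed/tee logic in the SSH command above.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)^\S+\s+` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // hostname already resolvable
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		if !strings.HasSuffix(hosts, "\n") {
			hosts += "\n"
		}
		return hosts + "127.0.1.1 " + name + "\n"
	}

	func main() {
		in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
		fmt.Print(ensureHostname(in, "multinode-178114-m02"))
	}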
	I1229 07:51:45.734698   38571 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 07:51:45.734721   38571 buildroot.go:174] setting up certificates
	I1229 07:51:45.734732   38571 provision.go:84] configureAuth start
	I1229 07:51:45.737442   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.737828   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.737851   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.740417   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.740777   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.740825   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.740969   38571 provision.go:143] copyHostCerts
	I1229 07:51:45.740996   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:51:45.741030   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 07:51:45.741048   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:51:45.741134   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 07:51:45.741231   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:51:45.741257   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 07:51:45.741268   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:51:45.741309   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 07:51:45.741369   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:51:45.741406   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 07:51:45.741416   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:51:45.741452   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 07:51:45.741515   38571 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.multinode-178114-m02 san=[127.0.0.1 192.168.39.61 localhost minikube multinode-178114-m02]
	I1229 07:51:45.939677   38571 provision.go:177] copyRemoteCerts
	I1229 07:51:45.939763   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:51:45.942600   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.943036   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.943071   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.943207   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	I1229 07:51:46.026486   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:51:46.026556   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1229 07:51:46.058604   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:51:46.058680   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:51:46.091269   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:51:46.091350   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:51:46.123837   38571 provision.go:87] duration metric: took 389.088504ms to configureAuth
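	configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the guest IP, localhost, minikube and the node name, signed by minikube's CA. A compact crypto/x509 sketch of a certificate with those SANs; it is self-signed for brevity, whereas the real provisioner signs with ca.pem/ca-key.pem.

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-178114-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list logged above.
			DNSNames:    []string{"localhost", "minikube", "multinode-178114-m02"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.61")},
		}
		// Self-signed sketch: template doubles as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			fmt.Println(err)
			return
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}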
	I1229 07:51:46.123867   38571 buildroot.go:189] setting minikube options for container-runtime
	I1229 07:51:46.124137   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:46.126902   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.127295   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:46.127318   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.127516   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:46.127761   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:46.127772   38571 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:51:46.229914   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 07:51:46.229949   38571 buildroot.go:70] root file system type: tmpfs
	I1229 07:51:46.230152   38571 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:51:46.233863   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.234532   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:46.234560   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.234729   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:46.234970   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:46.235024   38571 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.39.92"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:51:46.371964   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.39.92
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:51:46.375244   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.375719   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:46.375740   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.375936   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:46.376190   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:46.376219   38571 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:51:47.365467   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1229 07:51:47.365501   38571 machine.go:97] duration metric: took 1.979172812s to provisionDockerMachine
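	The docker.service unit echoed above is rendered with the node's proxy settings and TLS paths filled in, then moved into place only when it differs from the installed unit. A reduced text/template sketch of the rendering step; the struct and field names here are illustrative, not minikube's actual template.

	package main

	import (
		"os"
		"text/template"
	)

	// Only the fields that vary in the unit shown above.
	type dockerUnit struct {
		NoProxy string
		CertDir string
	}

	const unitTmpl = `[Unit]
	Description=Docker Application Container Engine
	Requires=docker.socket

	[Service]
	Type=notify
	Restart=always
	Environment="NO_PROXY={{.NoProxy}}"
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CertDir}}/ca.pem --tlscert {{.CertDir}}/server.pem --tlskey {{.CertDir}}/server-key.pem

	[Install]
	WantedBy=multi-user.target
	`

	func main() {
		t := template.Must(template.New("docker.service").Parse(unitTmpl))
		// Values taken from the log above; printed to stdout instead of
		// being written to /lib/systemd/system/docker.service.new.
		t.Execute(os.Stdout, dockerUnit{NoProxy: "192.168.39.92", CertDir: "/etc/docker"})
	}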
	I1229 07:51:47.365525   38571 start.go:293] postStartSetup for "multinode-178114-m02" (driver="kvm2")
	I1229 07:51:47.365544   38571 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:51:47.365614   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:51:47.369102   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.369609   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:47.369658   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.369858   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	I1229 07:51:47.454628   38571 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:51:47.460096   38571 info.go:137] Remote host: Buildroot 2025.02
	I1229 07:51:47.460129   38571 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 07:51:47.460214   38571 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 07:51:47.460300   38571 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 07:51:47.460314   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /etc/ssl/certs/134862.pem
	I1229 07:51:47.460446   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:51:47.474150   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:51:47.504508   38571 start.go:296] duration metric: took 138.968113ms for postStartSetup
	I1229 07:51:47.504548   38571 fix.go:56] duration metric: took 15.73836213s for fixHost
	I1229 07:51:47.507872   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.508342   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:47.508376   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.508578   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:47.508890   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:47.508907   38571 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 07:51:47.612534   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766994707.579723900
	
	I1229 07:51:47.612568   38571 fix.go:216] guest clock: 1766994707.579723900
	I1229 07:51:47.612580   38571 fix.go:229] Guest: 2025-12-29 07:51:47.5797239 +0000 UTC Remote: 2025-12-29 07:51:47.504552719 +0000 UTC m=+54.455724724 (delta=75.171181ms)
	I1229 07:51:47.612607   38571 fix.go:200] guest clock delta is within tolerance: 75.171181ms
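	The guest-clock check above parses the output of `date +%s.%N` and compares it against the host clock; corrective action is taken only when the delta exceeds a tolerance. A small parsing sketch; the 2-second tolerance below is illustrative (the log reports ~75ms as within tolerance).

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1766994707.579723900" (seconds.nanoseconds,
	// as printed by `date +%s.%N`) into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1766994707.579723900")
		if err != nil {
			fmt.Println(err)
			return
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta %v (exceeds 2s tolerance: %v)\n", delta, delta > 2*time.Second)
	}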
	I1229 07:51:47.612613   38571 start.go:83] releasing machines lock for "multinode-178114-m02", held for 15.846442111s
	I1229 07:51:47.615681   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.616222   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:47.616256   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.618813   38571 out.go:179] * Found network options:
	I1229 07:51:47.620409   38571 out.go:179]   - NO_PROXY=192.168.39.92
	W1229 07:51:47.621860   38571 proxy.go:120] fail to check proxy env: Error ip not in block
	W1229 07:51:47.622320   38571 proxy.go:120] fail to check proxy env: Error ip not in block
	I1229 07:51:47.622422   38571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1229 07:51:47.622467   38571 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:51:47.625959   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.626049   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.626524   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:47.626553   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.626568   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:47.626601   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.626763   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	I1229 07:51:47.626768   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	W1229 07:51:47.726024   38571 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:51:47.726098   38571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:51:47.746914   38571 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1229 07:51:47.746951   38571 start.go:496] detecting cgroup driver to use...
	I1229 07:51:47.746985   38571 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:51:47.747112   38571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:51:47.769910   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:51:47.783288   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:51:47.796442   38571 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:51:47.796524   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:51:47.809923   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:51:47.822747   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:51:47.835679   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:51:47.849902   38571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:51:47.863549   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:51:47.877168   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:51:47.891472   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:51:47.904645   38571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:51:47.916418   38571 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1229 07:51:47.916488   38571 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1229 07:51:47.930238   38571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:51:47.942772   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:48.101330   38571 ssh_runner.go:195] Run: sudo systemctl restart containerd
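	The run of sed commands above switches containerd to the systemd cgroup driver and the runc v2 shim before restarting it. The SystemdCgroup edit, done with Go's regexp package on the file contents instead of sed, looks roughly like this.

	package main

	import (
		"fmt"
		"regexp"
	)

	// setSystemdCgroup rewrites every "SystemdCgroup = ..." assignment to
	// true while preserving indentation, matching the
	// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'`
	// call logged above.
	func setSystemdCgroup(conf string) string {
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		return re.ReplaceAllString(conf, "${1}SystemdCgroup = true")
	}

	func main() {
		in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	            SystemdCgroup = false
	`
		fmt.Print(setSystemdCgroup(in))
	}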
	I1229 07:51:48.143872   38571 start.go:496] detecting cgroup driver to use...
	I1229 07:51:48.143913   38571 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:51:48.143982   38571 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:51:48.164641   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:51:48.186731   38571 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:51:48.212579   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:51:48.229812   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:51:48.247059   38571 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1229 07:51:48.277023   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:51:48.299961   38571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:51:48.333362   38571 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:51:48.338889   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:51:48.356729   38571 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:51:48.379319   38571 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:51:48.548034   38571 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:51:48.709643   38571 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:51:48.709689   38571 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 07:51:48.733488   38571 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:51:48.750952   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:48.912548   38571 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 07:51:49.511196   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:51:49.528962   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:51:49.545643   38571 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 07:51:49.564777   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:51:49.582256   38571 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:51:49.727380   38571 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:51:49.886923   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:50.040853   38571 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:51:50.086598   38571 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:51:50.104551   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:50.255421   38571 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:51:50.377138   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:51:50.406637   38571 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:51:50.406717   38571 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:51:50.416041   38571 start.go:574] Will wait 60s for crictl version
	I1229 07:51:50.416128   38571 ssh_runner.go:195] Run: which crictl
	I1229 07:51:50.420871   38571 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 07:51:50.463224   38571 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
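	The "Will wait 60s for socket path /var/run/cri-dockerd.sock" step a few lines above is a stat-until-it-exists loop. A minimal sketch:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls path with os.Stat until the file exists or the
	// deadline passes, which is all the 60-second wait above needs to do.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}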
	I1229 07:51:50.463305   38571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:51:50.494374   38571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:51:50.524155   38571 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 07:51:50.525817   38571 out.go:179]   - env NO_PROXY=192.168.39.92
	I1229 07:51:50.529771   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:50.530188   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:50.530210   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:50.530394   38571 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 07:51:50.535249   38571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:51:50.551107   38571 mustload.go:66] Loading cluster: multinode-178114
	I1229 07:51:50.551401   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:50.552904   38571 host.go:66] Checking if "multinode-178114" exists ...
	I1229 07:51:50.553137   38571 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114 for IP: 192.168.39.61
	I1229 07:51:50.553149   38571 certs.go:195] generating shared ca certs ...
	I1229 07:51:50.553162   38571 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:50.553279   38571 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 07:51:50.553329   38571 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 07:51:50.553354   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:51:50.553378   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:51:50.553400   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:51:50.553415   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:51:50.553474   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 07:51:50.553508   38571 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 07:51:50.553516   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:51:50.553541   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:51:50.553564   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:51:50.553588   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 07:51:50.553627   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:51:50.553656   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem -> /usr/share/ca-certificates/13486.pem
	I1229 07:51:50.553669   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /usr/share/ca-certificates/134862.pem
	I1229 07:51:50.553681   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.553703   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:51:50.586711   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:51:50.620415   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:51:50.653838   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:51:50.688575   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 07:51:50.720377   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 07:51:50.751255   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:51:50.782045   38571 ssh_runner.go:195] Run: openssl version
	I1229 07:51:50.791331   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 07:51:50.804789   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 07:51:50.818071   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 07:51:50.824552   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 07:51:50.824642   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 07:51:50.833293   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:51:50.847330   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/134862.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:51:50.860707   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.874005   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:51:50.888500   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.894902   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.894976   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.902998   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:51:50.915886   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:51:50.929184   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 07:51:50.942117   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 07:51:50.956133   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 07:51:50.961892   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 07:51:50.961968   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 07:51:50.969851   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:51:50.982872   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13486.pem /etc/ssl/certs/51391683.0
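	Each CA certificate installed above gets a <subject-hash>.0 symlink under /etc/ssl/certs, with the hash taken from `openssl x509 -hash -noout`. A sketch that shells out to openssl the same way the log does; it needs write access to the certs directory.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash creates certsDir/<hash>.0 -> certPath, where <hash> is
	// the OpenSSL subject hash of the certificate.
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("openssl x509 -hash: %w", err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // emulate ln -fs: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/134862.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}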
	I1229 07:51:50.998064   38571 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:51:51.003603   38571 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:51:51.003656   38571 kubeadm.go:935] updating node {m02 192.168.39.61 8443 v1.35.0 docker false true} ...
	I1229 07:51:51.003837   38571 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-178114-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:51:51.003929   38571 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:51:51.019065   38571 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:51:51.019176   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1229 07:51:51.032048   38571 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1229 07:51:51.055537   38571 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:51:51.079105   38571 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1229 07:51:51.084260   38571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:51:51.100209   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:51.248104   38571 ssh_runner.go:195] Run: sudo systemctl start kubelet
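	The kubelet block above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the "scp memory -->" lines) and then activated with daemon-reload plus start. A sketch of that last step, assuming the rendered drop-in text is already in hand and the process can manage systemd.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func startKubelet(dropIn string) error {
		// Write the drop-in, as the scp-from-memory step above does.
		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
			return err
		}
		if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			return err
		}
		// Then: systemctl daemon-reload && systemctl start kubelet.
		for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		// Abbreviated drop-in for illustration; the full ExecStart is shown in the log above.
		dropIn := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --hostname-override=multinode-178114-m02 --node-ip=192.168.39.61 --kubeconfig=/etc/kubernetes/kubelet.conf\n"
		if err := startKubelet(dropIn); err != nil {
			fmt.Println(err)
		}
	}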
	I1229 07:51:51.292715   38571 host.go:66] Checking if "multinode-178114" exists ...
	I1229 07:51:51.293054   38571 start.go:318] joinCluster: &{Name:multinode-178114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:51:51.293151   38571 start.go:331] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1229 07:51:51.293168   38571 host.go:66] Checking if "multinode-178114-m02" exists ...
	I1229 07:51:51.293340   38571 mustload.go:66] Loading cluster: multinode-178114
	I1229 07:51:51.293485   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:51.295575   38571 host.go:66] Checking if "multinode-178114" exists ...
	I1229 07:51:51.295931   38571 api_server.go:166] Checking apiserver status ...
	I1229 07:51:51.295996   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:51.299073   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:51.299607   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:51.299652   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:51.299856   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:51.400779   38571 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2188/cgroup
	I1229 07:51:51.413982   38571 ssh_runner.go:195] Run: sudo grep ^0:: /proc/2188/cgroup
	I1229 07:51:51.426506   38571 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38e8599b7482ef471a759bacef66e1e4.slice/docker-6ac14357a37d6809d0e08e3dae47a1557c5265469c560bda9efb32d6b0ddc97d.scope/cgroup.freeze
	I1229 07:51:51.440271   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:51.445141   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 200:
	ok
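	Before touching the stale worker, the restart path confirms the control plane is serving by probing /healthz, as logged above. A minimal probe in Go (InsecureSkipVerify keeps the sketch short; the real client verifies against the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz hits the apiserver's /healthz endpoint and expects a 200,
// like the check in the log above.
func checkHealthz(host string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + host + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("192.168.39.92:8443"))
}
```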
	I1229 07:51:51.445214   38571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl drain multinode-178114-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I1229 07:51:54.593019   38571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl drain multinode-178114-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.147758209s)
	I1229 07:51:54.593050   38571 node.go:129] successfully drained node "multinode-178114-m02"
	I1229 07:51:54.593120   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I1229 07:51:54.596278   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:54.596851   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:54.596883   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:54.597048   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	I1229 07:51:54.926513   38571 node.go:156] successfully reset node "multinode-178114-m02"
	I1229 07:51:54.927076   38571 kapi.go:59] client config for multinode-178114: &rest.Config{Host:"https://192.168.39.92:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:51:54.934420   38571 node.go:181] successfully deleted node "multinode-178114-m02"
	I1229 07:51:54.934444   38571 start.go:335] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
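	Re-joining an existing worker first removes its stale registration, as the lines above show: drain the node, run kubeadm reset on the node itself, then delete the Node object. A hedged sketch of the kubectl side of that sequence (flags copied from the logged commands; the wrapper is illustrative and the on-node reset step is omitted):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run shells out and streams output; plain kubectl stands in for the pinned
// binary path used in the log (/var/lib/minikube/binaries/v1.35.0/kubectl).
func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

// removeStaleWorker drains the node and deletes it from the API. The
// intermediate `kubeadm reset --force` runs on the worker over SSH and is
// not shown here.
func removeStaleWorker(node string) error {
	if err := run("kubectl", "drain", node, "--force", "--grace-period=1",
		"--skip-wait-for-delete-timeout=1", "--disable-eviction",
		"--ignore-daemonsets", "--delete-emptydir-data"); err != nil {
		return fmt.Errorf("drain %s: %w", node, err)
	}
	return run("kubectl", "delete", "node", node)
}

func main() {
	if err := removeStaleWorker("multinode-178114-m02"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```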
	I1229 07:51:54.934517   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1229 07:51:54.937993   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:54.938552   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:54.938591   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:54.938808   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:55.089264   38571 start.go:344] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1229 07:51:55.089362   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nsk4hi.3w75j2rtid1l0ik3 --discovery-token-ca-cert-hash sha256:47f6a2f9bf9c65c35fcfecaaac32e7befdc059ab272131834f3128167a677a66 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-178114-m02"
	I1229 07:51:56.060370   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1229 07:51:56.406702   38571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-178114-m02 minikube.k8s.io/updated_at=2025_12_29T07_51_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=multinode-178114 minikube.k8s.io/primary=false
	I1229 07:51:56.488264   38571 start.go:320] duration metric: took 5.195213102s to joinCluster
	I1229 07:51:56.490583   38571 out.go:203] 
	W1229 07:51:56.492051   38571 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: join node to cluster: error applying worker node "m02" label: apply node labels: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-178114-m02 minikube.k8s.io/updated_at=2025_12_29T07_51_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=multinode-178114 minikube.k8s.io/primary=false: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "multinode-178114-m02" not found
	
	W1229 07:51:56.492069   38571 out.go:285] * 
	* 
	W1229 07:51:56.492328   38571 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:51:56.493643   38571 out.go:203] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-178114 --wait=true -v=5 --alsologtostderr --driver=kvm2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-178114 -n multinode-178114
helpers_test.go:253: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p multinode-178114 logs -n 25: (1.069104256s)
helpers_test.go:261: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-178114 cp multinode-178114-m02:/home/docker/cp-test.txt multinode-178114:/home/docker/cp-test_multinode-178114-m02_multinode-178114.txt         │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ ssh     │ multinode-178114 ssh -n multinode-178114-m02 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ ssh     │ multinode-178114 ssh -n multinode-178114 sudo cat /home/docker/cp-test_multinode-178114-m02_multinode-178114.txt                                          │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ cp      │ multinode-178114 cp multinode-178114-m02:/home/docker/cp-test.txt multinode-178114-m03:/home/docker/cp-test_multinode-178114-m02_multinode-178114-m03.txt │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ ssh     │ multinode-178114 ssh -n multinode-178114-m02 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ ssh     │ multinode-178114 ssh -n multinode-178114-m03 sudo cat /home/docker/cp-test_multinode-178114-m02_multinode-178114-m03.txt                                  │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ cp      │ multinode-178114 cp testdata/cp-test.txt multinode-178114-m03:/home/docker/cp-test.txt                                                                    │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ ssh     │ multinode-178114 ssh -n multinode-178114-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ cp      │ multinode-178114 cp multinode-178114-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1125405884/001/cp-test_multinode-178114-m03.txt         │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ ssh     │ multinode-178114 ssh -n multinode-178114-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ cp      │ multinode-178114 cp multinode-178114-m03:/home/docker/cp-test.txt multinode-178114:/home/docker/cp-test_multinode-178114-m03_multinode-178114.txt         │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ ssh     │ multinode-178114 ssh -n multinode-178114-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ ssh     │ multinode-178114 ssh -n multinode-178114 sudo cat /home/docker/cp-test_multinode-178114-m03_multinode-178114.txt                                          │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ cp      │ multinode-178114 cp multinode-178114-m03:/home/docker/cp-test.txt multinode-178114-m02:/home/docker/cp-test_multinode-178114-m03_multinode-178114-m02.txt │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ ssh     │ multinode-178114 ssh -n multinode-178114-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ ssh     │ multinode-178114 ssh -n multinode-178114-m02 sudo cat /home/docker/cp-test_multinode-178114-m03_multinode-178114-m02.txt                                  │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ node    │ multinode-178114 node stop m03                                                                                                                            │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ node    │ multinode-178114 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:47 UTC │
	│ node    │ list -p multinode-178114                                                                                                                                  │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │                     │
	│ stop    │ -p multinode-178114                                                                                                                                       │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:47 UTC │ 29 Dec 25 07:48 UTC │
	│ start   │ -p multinode-178114 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:48 UTC │ 29 Dec 25 07:50 UTC │
	│ node    │ list -p multinode-178114                                                                                                                                  │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:50 UTC │                     │
	│ node    │ multinode-178114 node delete m03                                                                                                                          │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:50 UTC │ 29 Dec 25 07:50 UTC │
	│ stop    │ multinode-178114 stop                                                                                                                                     │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:50 UTC │ 29 Dec 25 07:50 UTC │
	│ start   │ -p multinode-178114 --wait=true -v=5 --alsologtostderr --driver=kvm2                                                                                      │ multinode-178114 │ jenkins │ v1.37.0 │ 29 Dec 25 07:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:50:53
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:50:53.098266   38571 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:50:53.098361   38571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:50:53.098368   38571 out.go:374] Setting ErrFile to fd 2...
	I1229 07:50:53.098374   38571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:50:53.098604   38571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:50:53.099117   38571 out.go:368] Setting JSON to false
	I1229 07:50:53.100056   38571 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5603,"bootTime":1766989050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:50:53.100114   38571 start.go:143] virtualization: kvm guest
	I1229 07:50:53.102525   38571 out.go:179] * [multinode-178114] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:50:53.104522   38571 notify.go:221] Checking for updates...
	I1229 07:50:53.104576   38571 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:50:53.106200   38571 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:50:53.107500   38571 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:50:53.108759   38571 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:50:53.109861   38571 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:50:53.111279   38571 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:50:53.113096   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:50:53.113667   38571 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:50:53.150223   38571 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 07:50:53.151479   38571 start.go:309] selected driver: kvm2
	I1229 07:50:53.151499   38571 start.go:928] validating driver "kvm2" against &{Name:multinode-178114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:50:53.151664   38571 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:50:53.152644   38571 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:50:53.152681   38571 cni.go:84] Creating CNI manager for ""
	I1229 07:50:53.152750   38571 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I1229 07:50:53.152827   38571 start.go:353] cluster config:
	{Name:multinode-178114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:50:53.152959   38571 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:50:53.155573   38571 out.go:179] * Starting "multinode-178114" primary control-plane node in "multinode-178114" cluster
	I1229 07:50:53.156973   38571 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:50:53.157034   38571 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1229 07:50:53.157046   38571 cache.go:65] Caching tarball of preloaded images
	I1229 07:50:53.157192   38571 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 07:50:53.157208   38571 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:50:53.157372   38571 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/config.json ...
	I1229 07:50:53.157666   38571 start.go:360] acquireMachinesLock for multinode-178114: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 07:50:53.157723   38571 start.go:364] duration metric: took 32.734µs to acquireMachinesLock for "multinode-178114"
	I1229 07:50:53.157738   38571 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:50:53.157757   38571 fix.go:54] fixHost starting: 
	I1229 07:50:53.159786   38571 fix.go:112] recreateIfNeeded on multinode-178114: state=Stopped err=<nil>
	W1229 07:50:53.159832   38571 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:50:53.161485   38571 out.go:252] * Restarting existing kvm2 VM for "multinode-178114" ...
	I1229 07:50:53.161556   38571 main.go:144] libmachine: starting domain...
	I1229 07:50:53.161571   38571 main.go:144] libmachine: ensuring networks are active...
	I1229 07:50:53.162378   38571 main.go:144] libmachine: Ensuring network default is active
	I1229 07:50:53.162880   38571 main.go:144] libmachine: Ensuring network mk-multinode-178114 is active
	I1229 07:50:53.163394   38571 main.go:144] libmachine: getting domain XML...
	I1229 07:50:53.164599   38571 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>multinode-178114</name>
	  <uuid>ae972118-d57b-4c37-b972-fae087082f1e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/multinode-178114.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:52:d2:7c'/>
	      <source network='mk-multinode-178114'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:35:33:c2'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
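	The XML above is the libvirt domain that "Restarting existing kvm2 VM" boots. Defining and starting such a domain with the libvirt Go bindings looks roughly like this (a sketch assuming libvirt.org/go/libvirt and a reachable libvirtd at qemu:///system; the XML file name is hypothetical):

```go
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("multinode-178114.xml") // a domain definition like the one above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer conn.Close()

	// Define (or redefine) the persistent domain from XML, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // equivalent to `virsh start`
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("domain started")
}
```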
	
	I1229 07:50:54.473390   38571 main.go:144] libmachine: waiting for domain to start...
	I1229 07:50:54.474941   38571 main.go:144] libmachine: domain is now running
	I1229 07:50:54.474966   38571 main.go:144] libmachine: waiting for IP...
	I1229 07:50:54.475914   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:50:54.476623   38571 main.go:144] libmachine: domain multinode-178114 has current primary IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:50:54.476642   38571 main.go:144] libmachine: found domain IP: 192.168.39.92
	I1229 07:50:54.476651   38571 main.go:144] libmachine: reserving static IP address...
	I1229 07:50:54.477234   38571 main.go:144] libmachine: found host DHCP lease matching {name: "multinode-178114", mac: "52:54:00:52:d2:7c", ip: "192.168.39.92"} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:48:35 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:50:54.477272   38571 main.go:144] libmachine: skip adding static IP to network mk-multinode-178114 - found existing host DHCP lease matching {name: "multinode-178114", mac: "52:54:00:52:d2:7c", ip: "192.168.39.92"}
	I1229 07:50:54.477285   38571 main.go:144] libmachine: reserved static IP address 192.168.39.92 for domain multinode-178114
	I1229 07:50:54.477298   38571 main.go:144] libmachine: waiting for SSH...
	I1229 07:50:54.477306   38571 main.go:144] libmachine: Getting to WaitForSSH function...
	I1229 07:50:54.480161   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:50:54.480824   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:48:35 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:50:54.480862   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:50:54.481093   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:50:54.481393   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:50:54.481407   38571 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1229 07:50:57.543142   38571 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I1229 07:51:03.623087   38571 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I1229 07:51:06.741989   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 
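	The two "no route to host" errors followed by an empty SSH result show the wait-for-SSH loop: keep dialing port 22 until the freshly booted VM answers. A standard-library sketch of that retry loop (timings here are arbitrary, not minikube's):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the node's SSH port until it accepts TCP connections or
// the deadline passes, similar to the retries visible above.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second) // e.g. "no route to host" while the VM is still booting
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("192.168.39.92:22", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```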
	I1229 07:51:06.745461   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.746106   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:06.746136   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.746466   38571 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/config.json ...
	I1229 07:51:06.746701   38571 machine.go:94] provisionDockerMachine start ...
	I1229 07:51:06.749276   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.749840   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:06.749884   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.750065   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:06.750258   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:06.750268   38571 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:51:06.858747   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1229 07:51:06.858781   38571 buildroot.go:166] provisioning hostname "multinode-178114"
	I1229 07:51:06.861684   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.862261   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:06.862287   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:06.862499   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:06.862693   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:06.862704   38571 main.go:144] libmachine: About to run SSH command:
	sudo hostname multinode-178114 && echo "multinode-178114" | sudo tee /etc/hostname
	I1229 07:51:06.996666   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-178114
	
	I1229 07:51:06.999917   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.000487   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.000522   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.000743   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:07.000984   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:07.001007   38571 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-178114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-178114/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-178114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:51:07.125866   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:51:07.125903   38571 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 07:51:07.125921   38571 buildroot.go:174] setting up certificates
	I1229 07:51:07.125959   38571 provision.go:84] configureAuth start
	I1229 07:51:07.128359   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.128734   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.128753   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.130742   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.131153   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.131173   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.131287   38571 provision.go:143] copyHostCerts
	I1229 07:51:07.131310   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:51:07.131339   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 07:51:07.131348   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:51:07.131416   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 07:51:07.131501   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:51:07.131526   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 07:51:07.131535   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:51:07.131562   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 07:51:07.131655   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:51:07.131675   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 07:51:07.131678   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:51:07.131703   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 07:51:07.131755   38571 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.multinode-178114 san=[127.0.0.1 192.168.39.92 localhost minikube multinode-178114]
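	The server cert generated above carries both IP and DNS SANs (127.0.0.1, 192.168.39.92, localhost, minikube, multinode-178114). A self-signed equivalent using only crypto/x509, just to show how such a SAN list is expressed; minikube signs with its own CA rather than self-signing:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-178114"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-178114"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.92")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```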
	I1229 07:51:07.153058   38571 provision.go:177] copyRemoteCerts
	I1229 07:51:07.153129   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:51:07.155873   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.156202   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.156226   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.156339   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:07.242577   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:51:07.242645   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1229 07:51:07.278317   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:51:07.278409   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:51:07.313100   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:51:07.313185   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:51:07.343069   38571 provision.go:87] duration metric: took 217.084373ms to configureAuth
	I1229 07:51:07.343103   38571 buildroot.go:189] setting minikube options for container-runtime
	I1229 07:51:07.343308   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:07.345919   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.346237   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.346264   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.346399   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:07.346580   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:07.346590   38571 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:51:07.455352   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 07:51:07.455382   38571 buildroot.go:70] root file system type: tmpfs
	I1229 07:51:07.455540   38571 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:51:07.459073   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.459582   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.459610   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.459913   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:07.460123   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:07.460168   38571 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:51:07.585693   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:51:07.588880   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.589346   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:07.589372   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:07.589559   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:07.589742   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:07.589757   38571 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:51:08.671158   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1229 07:51:08.671184   38571 machine.go:97] duration metric: took 1.924469757s to provisionDockerMachine
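For context, the provisioning sequence above follows a write-then-swap pattern: the rendered docker.service is staged as docker.service.new, diffed against any installed unit, and only moved into place (followed by daemon-reload, enable and restart) when the two differ or no unit exists yet. A minimal Go sketch of that pattern, assuming local file paths for illustration (minikube performs the equivalent steps over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// installUnit mirrors the write-then-swap pattern above: stage the rendered
// unit as <unit>.new, diff it against the installed copy, and only move it
// into place when the two differ or no unit exists yet. The real run does the
// same with printf | tee, diff, mv and systemctl over SSH inside the guest.
func installUnit(installed, rendered string) (changed bool, err error) {
	staged := installed + ".new"
	if err := os.WriteFile(staged, []byte(rendered), 0o644); err != nil {
		return false, err
	}
	// diff exits 0 only when the files are identical; differing content or a
	// missing installed unit means the staged copy should win.
	if exec.Command("diff", "-u", installed, staged).Run() == nil {
		return false, os.Remove(staged)
	}
	return true, os.Rename(staged, installed)
}

func main() {
	dir, err := os.MkdirTemp("", "unit-demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	unitPath := filepath.Join(dir, "docker.service")
	changed, err := installUnit(unitPath, "[Unit]\nDescription=Docker Application Container Engine\n")
	fmt.Println("changed:", changed, "err:", err)
	// In the real flow a change is followed by:
	//   systemctl -f daemon-reload && systemctl -f enable docker && systemctl -f restart docker
}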
	I1229 07:51:08.671206   38571 start.go:293] postStartSetup for "multinode-178114" (driver="kvm2")
	I1229 07:51:08.671217   38571 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:51:08.671295   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:51:08.674391   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.674948   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:08.674985   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.675233   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:08.761896   38571 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:51:08.766628   38571 info.go:137] Remote host: Buildroot 2025.02
	I1229 07:51:08.766662   38571 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 07:51:08.766749   38571 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 07:51:08.766881   38571 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 07:51:08.766896   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /etc/ssl/certs/134862.pem
	I1229 07:51:08.767028   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:51:08.780296   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:51:08.811290   38571 start.go:296] duration metric: took 140.065036ms for postStartSetup
	I1229 07:51:08.811351   38571 fix.go:56] duration metric: took 15.653592801s for fixHost
	I1229 07:51:08.814267   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.814627   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:08.814651   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.814823   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:08.815124   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1229 07:51:08.815147   38571 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 07:51:08.924579   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766994668.888468870
	
	I1229 07:51:08.924604   38571 fix.go:216] guest clock: 1766994668.888468870
	I1229 07:51:08.924614   38571 fix.go:229] Guest: 2025-12-29 07:51:08.88846887 +0000 UTC Remote: 2025-12-29 07:51:08.811358033 +0000 UTC m=+15.762530033 (delta=77.110837ms)
	I1229 07:51:08.924636   38571 fix.go:200] guest clock delta is within tolerance: 77.110837ms
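The clock check above compares the guest's `date +%s.%N` output against the host's wall clock and accepts the result when the drift stays within tolerance. A small Go sketch of that comparison, using the values from the log; the one-second tolerance is an assumption for illustration, not minikube's configured value:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output, compares it with
// the host clock, and reports whether the drift is inside the tolerance.
func guestClockDelta(guestOut string, host time.Time, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	within := math.Abs(float64(delta)) <= float64(tolerance)
	return delta, within, nil
}

func main() {
	// Timestamps taken from the log above; expect a delta of roughly 77ms.
	host := time.Unix(1766994668, 811358033)
	delta, ok, err := guestClockDelta("1766994668.888468870", host, time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}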
	I1229 07:51:08.924642   38571 start.go:83] releasing machines lock for "multinode-178114", held for 15.766910696s
	I1229 07:51:08.928536   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.929437   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:08.929475   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.930223   38571 ssh_runner.go:195] Run: cat /version.json
	I1229 07:51:08.930433   38571 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:51:08.933593   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.933965   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.934165   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:08.934198   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.934379   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:08.934392   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:08.934442   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:08.934590   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:09.013986   38571 ssh_runner.go:195] Run: systemctl --version
	I1229 07:51:09.038743   38571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:51:09.045615   38571 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:51:09.045689   38571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:51:09.065815   38571 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1229 07:51:09.065840   38571 start.go:496] detecting cgroup driver to use...
	I1229 07:51:09.065865   38571 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:51:09.065958   38571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:51:09.089198   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:51:09.101721   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:51:09.114294   38571 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:51:09.114369   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:51:09.127454   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:51:09.140148   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:51:09.152891   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:51:09.166865   38571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:51:09.179867   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:51:09.192602   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:51:09.206123   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:51:09.219317   38571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:51:09.230222   38571 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1229 07:51:09.230307   38571 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1229 07:51:09.243761   38571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:51:09.255529   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:09.398450   38571 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 07:51:09.437402   38571 start.go:496] detecting cgroup driver to use...
	I1229 07:51:09.437451   38571 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:51:09.437500   38571 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:51:09.456156   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:51:09.474931   38571 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:51:09.501676   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:51:09.520621   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:51:09.537483   38571 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1229 07:51:09.574468   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:51:09.590669   38571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:51:09.614033   38571 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:51:09.618443   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:51:09.630500   38571 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:51:09.652449   38571 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:51:09.801472   38571 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:51:09.978337   38571 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:51:09.978456   38571 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 07:51:10.001127   38571 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:51:10.017264   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:10.162882   38571 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 07:51:10.726562   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:51:10.742501   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:51:10.757500   38571 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 07:51:10.774644   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:51:10.790824   38571 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:51:10.934117   38571 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:51:11.078908   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:11.224636   38571 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:51:11.265331   38571 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:51:11.282148   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:11.432086   38571 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:51:11.553354   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:51:11.573972   38571 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:51:11.574048   38571 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:51:11.581008   38571 start.go:574] Will wait 60s for crictl version
	I1229 07:51:11.581096   38571 ssh_runner.go:195] Run: which crictl
	I1229 07:51:11.585569   38571 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 07:51:11.621890   38571 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 07:51:11.621985   38571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:51:11.651921   38571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:51:11.679435   38571 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 07:51:11.682297   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:11.682717   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:11.682744   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:11.682947   38571 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 07:51:11.688071   38571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:51:11.703105   38571 kubeadm.go:884] updating cluster {Name:multinode-178114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:51:11.703275   38571 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:51:11.703323   38571 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:51:11.723930   38571 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1229 07:51:11.723957   38571 docker.go:624] Images already preloaded, skipping extraction
	I1229 07:51:11.724028   38571 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:51:11.743395   38571 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1229 07:51:11.743420   38571 cache_images.go:86] Images are preloaded, skipping loading
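The two identical `docker images` listings above feed the preload decision: extraction of the preloaded image tarball is skipped only when every expected image is already present in the runtime. A small Go sketch of that containment check, with a hypothetical expected-image list (the real list comes from minikube's preload manifest):

package main

import (
	"fmt"
	"strings"
)

// imagesPreloaded reports whether every expected image shows up in the output
// of `docker images --format {{.Repository}}:{{.Tag}}`, which is the signal
// behind "Images already preloaded, skipping extraction" above.
func imagesPreloaded(dockerImagesOut string, expected []string) bool {
	have := make(map[string]bool)
	for _, line := range strings.Split(dockerImagesOut, "\n") {
		if img := strings.TrimSpace(line); img != "" {
			have[img] = true
		}
	}
	for _, img := range expected {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	out := `registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1`
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0",
		"registry.k8s.io/pause:3.10.1",
	}
	fmt.Println("preloaded:", imagesPreloaded(out, expected))
}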
	I1229 07:51:11.743430   38571 kubeadm.go:935] updating node { 192.168.39.92 8443 v1.35.0 docker true true} ...
	I1229 07:51:11.743533   38571 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-178114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:51:11.743587   38571 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 07:51:11.796519   38571 cni.go:84] Creating CNI manager for ""
	I1229 07:51:11.796545   38571 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I1229 07:51:11.796557   38571 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:51:11.796588   38571 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-178114 NodeName:multinode-178114 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:51:11.796758   38571 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-178114"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
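The kubeadm config above is rendered from the options struct logged at kubeadm.go:197 (node name, IPs, CIDRs, CRI socket, extra args). A stripped-down Go sketch of that rendering step, with a hypothetical template covering only a few of the fields shown, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is a pared-down stand-in for the options that feed the kubeadm
// config above; the real struct carries many more fields.
type kubeadmOpts struct {
	NodeName      string
	NodeIP        string
	APIServerPort int
	PodSubnet     string
	ServiceCIDR   string
	K8sVersion    string
	CRISocket     string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		NodeName:      "multinode-178114",
		NodeIP:        "192.168.39.92",
		APIServerPort: 8443,
		PodSubnet:     "10.244.0.0/16",
		ServiceCIDR:   "10.96.0.0/12",
		K8sVersion:    "v1.35.0",
		CRISocket:     "/var/run/cri-dockerd.sock",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}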
	
	I1229 07:51:11.796859   38571 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:51:11.809410   38571 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:51:11.809497   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:51:11.821673   38571 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1229 07:51:11.843346   38571 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:51:11.864417   38571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1229 07:51:11.886464   38571 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1229 07:51:11.890871   38571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:51:11.906877   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:12.052311   38571 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:51:12.094371   38571 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114 for IP: 192.168.39.92
	I1229 07:51:12.094403   38571 certs.go:195] generating shared ca certs ...
	I1229 07:51:12.094425   38571 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:12.094630   38571 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 07:51:12.094734   38571 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 07:51:12.094767   38571 certs.go:257] generating profile certs ...
	I1229 07:51:12.094930   38571 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/client.key
	I1229 07:51:12.095005   38571 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/apiserver.key.0309c206
	I1229 07:51:12.095075   38571 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/proxy-client.key
	I1229 07:51:12.095092   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:51:12.095114   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:51:12.095131   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:51:12.095149   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:51:12.095166   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:51:12.095186   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:51:12.095205   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:51:12.095230   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:51:12.095305   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 07:51:12.095350   38571 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 07:51:12.095364   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:51:12.095410   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:51:12.095449   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:51:12.095481   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 07:51:12.095536   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:51:12.095592   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:12.095615   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem -> /usr/share/ca-certificates/13486.pem
	I1229 07:51:12.095636   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /usr/share/ca-certificates/134862.pem
	I1229 07:51:12.096355   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:51:12.132034   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:51:12.177430   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:51:12.216360   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:51:12.258275   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:51:12.291093   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:51:12.321598   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:51:12.352706   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:51:12.384224   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:51:12.414782   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 07:51:12.444889   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 07:51:12.475101   38571 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:51:12.496952   38571 ssh_runner.go:195] Run: openssl version
	I1229 07:51:12.504099   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:12.515723   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:51:12.527980   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:12.533592   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:12.533668   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:12.541533   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:51:12.553380   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:51:12.565726   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 07:51:12.577658   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 07:51:12.589832   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 07:51:12.595615   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 07:51:12.595681   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 07:51:12.603515   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:51:12.615589   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13486.pem /etc/ssl/certs/51391683.0
	I1229 07:51:12.627899   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 07:51:12.639904   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 07:51:12.652275   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 07:51:12.657954   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 07:51:12.658062   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 07:51:12.665699   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:51:12.677962   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/134862.pem /etc/ssl/certs/3ec20f2e.0
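Each certificate above is installed the same way: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and point /etc/ssl/certs/<hash>.0 at it so the system trust store resolves it. A Go sketch of that hash-and-symlink step; paths are illustrative, and the real run performs these commands with sudo over SSH inside the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the certificate's subject hash and creates the
// /etc/ssl/certs/<hash>.0 symlink that TLS libraries look up.
func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}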
	I1229 07:51:12.689811   38571 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:51:12.695401   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:51:12.703343   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:51:12.711012   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:51:12.718870   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:51:12.726935   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:51:12.734500   38571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:51:12.742262   38571 kubeadm.go:401] StartCluster: {Name:multinode-178114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:51:12.742418   38571 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:51:12.763028   38571 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:51:12.775756   38571 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:51:12.775778   38571 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:51:12.775842   38571 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:51:12.788013   38571 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:51:12.788448   38571 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-178114" does not appear in /home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:51:12.788557   38571 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9552/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-178114" cluster setting kubeconfig missing "multinode-178114" context setting]
	I1229 07:51:12.788782   38571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9552/kubeconfig: {Name:mk3da1041c6a3b33ce0ec44cd869a994fb20af51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:12.789261   38571 kapi.go:59] client config for multinode-178114: &rest.Config{Host:"https://192.168.39.92:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:51:12.789676   38571 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 07:51:12.789692   38571 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 07:51:12.789696   38571 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 07:51:12.789701   38571 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 07:51:12.789704   38571 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 07:51:12.789711   38571 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 07:51:12.789828   38571 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1229 07:51:12.790025   38571 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:51:12.801675   38571 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.92
	I1229 07:51:12.801713   38571 kubeadm.go:1161] stopping kube-system containers ...
	I1229 07:51:12.801774   38571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:51:12.824583   38571 docker.go:487] Stopping containers: [fa2d4896312f 10cc8e4caf70 a4a4bb711fae 3892c6c0af78 3851ba81f40b d644dd7bd97e b96c765a8200 88eb1127f7bc 73a8ff257e15 4488bd802d84 639f398a163a d795c528b8a8 2dc0bafab6b5 763a73e3ba26 1092c045c2b8 537ac7bc94fd a093949edbf9 6a5c93b6883a e1490ac6bef8 11be3013b57b 445e364e9cff d61bdd98dbb5 4967abaafd4b e064edf96800 69799af14d2b cad4676f8501 5a2d698e85df ffef27606300 53e347908f7e e8220b5c7dc1 f207bbed0521]
	I1229 07:51:12.824679   38571 ssh_runner.go:195] Run: docker stop fa2d4896312f 10cc8e4caf70 a4a4bb711fae 3892c6c0af78 3851ba81f40b d644dd7bd97e b96c765a8200 88eb1127f7bc 73a8ff257e15 4488bd802d84 639f398a163a d795c528b8a8 2dc0bafab6b5 763a73e3ba26 1092c045c2b8 537ac7bc94fd a093949edbf9 6a5c93b6883a e1490ac6bef8 11be3013b57b 445e364e9cff d61bdd98dbb5 4967abaafd4b e064edf96800 69799af14d2b cad4676f8501 5a2d698e85df ffef27606300 53e347908f7e e8220b5c7dc1 f207bbed0521
	I1229 07:51:12.848053   38571 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 07:51:12.866345   38571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:51:12.878659   38571 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:51:12.878680   38571 kubeadm.go:158] found existing configuration files:
	
	I1229 07:51:12.878746   38571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:51:12.889702   38571 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:51:12.889781   38571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:51:12.901605   38571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:51:12.912779   38571 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:51:12.912879   38571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:51:12.925699   38571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:51:12.936701   38571 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:51:12.936764   38571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:51:12.949135   38571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:51:12.960275   38571 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:51:12.960339   38571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:51:12.972715   38571 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:51:12.985305   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:13.111528   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:13.591093   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:13.836241   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:13.923699   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:14.041891   38571 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:51:14.041983   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:14.542156   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:15.042868   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:15.542333   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:16.043048   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:16.077971   38571 api_server.go:72] duration metric: took 2.036089879s to wait for apiserver process to appear ...
	I1229 07:51:16.078003   38571 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:51:16.078027   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:16.078655   38571 api_server.go:315] stopped: https://192.168.39.92:8443/healthz: Get "https://192.168.39.92:8443/healthz": dial tcp 192.168.39.92:8443: connect: connection refused
	I1229 07:51:16.578305   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:18.443388   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1229 07:51:18.443431   38571 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1229 07:51:18.443450   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:18.484701   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1229 07:51:18.484737   38571 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1229 07:51:18.579095   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:18.595577   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:51:18.595609   38571 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:51:19.078258   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:19.087363   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:51:19.087390   38571 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:51:19.579109   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:19.588877   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:51:19.588907   38571 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:51:20.078209   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:20.091375   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 200:
	ok
	I1229 07:51:20.102932   38571 api_server.go:141] control plane version: v1.35.0
	I1229 07:51:20.102963   38571 api_server.go:131] duration metric: took 4.024953885s to wait for apiserver health ...
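	The healthz retries above walk /healthz from 403 (anonymous request rejected) through 500 (poststarthooks such as rbac/bootstrap-roles not finished) to 200. A minimal sketch of that kind of readiness poll, assuming a self-signed apiserver certificate and reusing the endpoint from the log purely as an example value (this is an illustration, not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
	// the deadline passes; 403 and 500 responses are treated as "not ready yet",
	// matching the retry behaviour visible in the log.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// the apiserver presents a self-signed certificate in this setup
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s to report ok", url)
	}

	func main() {
		// endpoint taken from the log above, used here only as an example
		if err := waitForHealthz("https://192.168.39.92:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}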
	I1229 07:51:20.102971   38571 cni.go:84] Creating CNI manager for ""
	I1229 07:51:20.102975   38571 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I1229 07:51:20.105045   38571 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:51:20.106509   38571 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:51:20.129102   38571 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:51:20.129126   38571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:51:20.173712   38571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:51:20.944213   38571 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:51:20.950659   38571 system_pods.go:59] 12 kube-system pods found
	I1229 07:51:20.950722   38571 system_pods.go:61] "coredns-7d764666f9-gqqbx" [dd603e72-7da4-4f75-8c97-de4593e77af5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:51:20.950737   38571 system_pods.go:61] "etcd-multinode-178114" [c6fc7d5a-3d83-4382-8c01-91e4f8d8ad33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:51:20.950746   38571 system_pods.go:61] "kindnet-4t5sq" [6712d04a-a02b-425c-8659-11dec3a7d281] Running
	I1229 07:51:20.950753   38571 system_pods.go:61] "kindnet-5tphv" [a9324ef8-9243-4845-84cf-f27ee8a74693] Running
	I1229 07:51:20.950762   38571 system_pods.go:61] "kindnet-gwvxq" [07f4fe8b-bce0-4fd5-93cf-2de2803c6c14] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:51:20.950771   38571 system_pods.go:61] "kube-apiserver-multinode-178114" [e17c2dfb-5fb7-42ef-a235-edc0a3bb2c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:51:20.950784   38571 system_pods.go:61] "kube-controller-manager-multinode-178114" [a0c429d8-6d0a-4057-b3f4-ceef822a9ffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:51:20.950815   38571 system_pods.go:61] "kube-proxy-2b4vx" [ade09c16-533b-4a8b-a136-3692d76cbfc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:51:20.950827   38571 system_pods.go:61] "kube-proxy-cv887" [881437d9-95e9-49e4-84c0-821225e76269] Running
	I1229 07:51:20.950833   38571 system_pods.go:61] "kube-proxy-z489z" [0d8a34cb-9fbc-4cae-b846-2dd702009fc0] Running
	I1229 07:51:20.950843   38571 system_pods.go:61] "kube-scheduler-multinode-178114" [704630b8-21f0-432d-89eb-3b3308e951c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:51:20.950854   38571 system_pods.go:61] "storage-provisioner" [8bdbcf14-1a5b-4150-9adc-728e41f9e652] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:51:20.950865   38571 system_pods.go:74] duration metric: took 6.622235ms to wait for pod list to return data ...
	I1229 07:51:20.950878   38571 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:51:20.953813   38571 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1229 07:51:20.953850   38571 node_conditions.go:123] node cpu capacity is 2
	I1229 07:51:20.953868   38571 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1229 07:51:20.953875   38571 node_conditions.go:123] node cpu capacity is 2
	I1229 07:51:20.953884   38571 node_conditions.go:105] duration metric: took 2.996676ms to run NodePressure ...
	I1229 07:51:20.953965   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:51:21.363980   38571 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1229 07:51:21.370528   38571 kubeadm.go:744] kubelet initialised
	I1229 07:51:21.370555   38571 kubeadm.go:745] duration metric: took 6.544349ms waiting for restarted kubelet to initialise ...
	I1229 07:51:21.370576   38571 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:51:21.397453   38571 ops.go:34] apiserver oom_adj: -16
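	The oom_adj check above simply resolves the kube-apiserver PID and reads /proc/<pid>/oom_adj (-16 here). A small stand-alone equivalent, assuming it runs on the guest where kube-apiserver is a host-visible process:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Resolve the newest process named exactly kube-apiserver, then read its
		// oom_adj, the same check `cat /proc/$(pgrep kube-apiserver)/oom_adj` does.
		out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("kube-apiserver not found:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
	}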
	I1229 07:51:21.397480   38571 kubeadm.go:602] duration metric: took 8.621695264s to restartPrimaryControlPlane
	I1229 07:51:21.397492   38571 kubeadm.go:403] duration metric: took 8.65523758s to StartCluster
	I1229 07:51:21.397514   38571 settings.go:142] acquiring lock: {Name:mk265ccb37cd5f502c9a7085e916ed505357de62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:21.397588   38571 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:51:21.398128   38571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9552/kubeconfig: {Name:mk3da1041c6a3b33ce0ec44cd869a994fb20af51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:21.398334   38571 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1229 07:51:21.398443   38571 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:51:21.398681   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:21.400069   38571 out.go:179] * Verifying Kubernetes components...
	I1229 07:51:21.400069   38571 out.go:179] * Enabled addons: 
	I1229 07:51:21.401193   38571 addons.go:530] duration metric: took 2.755662ms for enable addons: enabled=[]
	I1229 07:51:21.401252   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:21.673313   38571 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:51:21.693823   38571 node_ready.go:35] waiting up to 6m0s for node "multinode-178114" to be "Ready" ...
	W1229 07:51:23.698318   38571 node_ready.go:57] node "multinode-178114" has "Ready":"False" status (will retry)
	W1229 07:51:26.197364   38571 node_ready.go:57] node "multinode-178114" has "Ready":"False" status (will retry)
	W1229 07:51:28.198652   38571 node_ready.go:57] node "multinode-178114" has "Ready":"False" status (will retry)
	W1229 07:51:30.199125   38571 node_ready.go:57] node "multinode-178114" has "Ready":"False" status (will retry)
	I1229 07:51:31.697220   38571 node_ready.go:49] node "multinode-178114" is "Ready"
	I1229 07:51:31.697265   38571 node_ready.go:38] duration metric: took 10.003370599s for node "multinode-178114" to be "Ready" ...
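	The node_ready wait above retries until the node's Ready condition flips to True. A rough equivalent using kubectl's JSONPath output instead of the Go client, with the node name and kubeconfig path taken from the log as example values:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitNodeReady polls the node's Ready condition via kubectl until it is True
	// or the timeout expires, roughly what the node_ready wait does via the API.
	func waitNodeReady(node, kubeconfig string, timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
				"get", "node", node, "-o",
				`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return true
			}
			time.Sleep(2 * time.Second)
		}
		return false
	}

	func main() {
		ok := waitNodeReady("multinode-178114", "/var/lib/minikube/kubeconfig", 6*time.Minute)
		fmt.Println("node ready:", ok)
	}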
	I1229 07:51:31.697287   38571 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:51:31.697351   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:31.718089   38571 api_server.go:72] duration metric: took 10.319712382s to wait for apiserver process to appear ...
	I1229 07:51:31.718118   38571 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:51:31.718140   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:31.724039   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 200:
	ok
	I1229 07:51:31.725170   38571 api_server.go:141] control plane version: v1.35.0
	I1229 07:51:31.725201   38571 api_server.go:131] duration metric: took 7.075007ms to wait for apiserver health ...
	I1229 07:51:31.725214   38571 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:51:31.729667   38571 system_pods.go:59] 12 kube-system pods found
	I1229 07:51:31.729704   38571 system_pods.go:61] "coredns-7d764666f9-gqqbx" [dd603e72-7da4-4f75-8c97-de4593e77af5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:51:31.729713   38571 system_pods.go:61] "etcd-multinode-178114" [c6fc7d5a-3d83-4382-8c01-91e4f8d8ad33] Running
	I1229 07:51:31.729721   38571 system_pods.go:61] "kindnet-4t5sq" [6712d04a-a02b-425c-8659-11dec3a7d281] Running
	I1229 07:51:31.729763   38571 system_pods.go:61] "kindnet-5tphv" [a9324ef8-9243-4845-84cf-f27ee8a74693] Running
	I1229 07:51:31.729771   38571 system_pods.go:61] "kindnet-gwvxq" [07f4fe8b-bce0-4fd5-93cf-2de2803c6c14] Running
	I1229 07:51:31.729784   38571 system_pods.go:61] "kube-apiserver-multinode-178114" [e17c2dfb-5fb7-42ef-a235-edc0a3bb2c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:51:31.729813   38571 system_pods.go:61] "kube-controller-manager-multinode-178114" [a0c429d8-6d0a-4057-b3f4-ceef822a9ffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:51:31.729825   38571 system_pods.go:61] "kube-proxy-2b4vx" [ade09c16-533b-4a8b-a136-3692d76cbfc9] Running
	I1229 07:51:31.729832   38571 system_pods.go:61] "kube-proxy-cv887" [881437d9-95e9-49e4-84c0-821225e76269] Running
	I1229 07:51:31.729838   38571 system_pods.go:61] "kube-proxy-z489z" [0d8a34cb-9fbc-4cae-b846-2dd702009fc0] Running
	I1229 07:51:31.729844   38571 system_pods.go:61] "kube-scheduler-multinode-178114" [704630b8-21f0-432d-89eb-3b3308e951c7] Running
	I1229 07:51:31.729853   38571 system_pods.go:61] "storage-provisioner" [8bdbcf14-1a5b-4150-9adc-728e41f9e652] Running
	I1229 07:51:31.729862   38571 system_pods.go:74] duration metric: took 4.639441ms to wait for pod list to return data ...
	I1229 07:51:31.729876   38571 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:51:31.733788   38571 default_sa.go:45] found service account: "default"
	I1229 07:51:31.733832   38571 default_sa.go:55] duration metric: took 3.949029ms for default service account to be created ...
	I1229 07:51:31.733846   38571 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:51:31.737599   38571 system_pods.go:86] 12 kube-system pods found
	I1229 07:51:31.737636   38571 system_pods.go:89] "coredns-7d764666f9-gqqbx" [dd603e72-7da4-4f75-8c97-de4593e77af5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:51:31.737645   38571 system_pods.go:89] "etcd-multinode-178114" [c6fc7d5a-3d83-4382-8c01-91e4f8d8ad33] Running
	I1229 07:51:31.737653   38571 system_pods.go:89] "kindnet-4t5sq" [6712d04a-a02b-425c-8659-11dec3a7d281] Running
	I1229 07:51:31.737657   38571 system_pods.go:89] "kindnet-5tphv" [a9324ef8-9243-4845-84cf-f27ee8a74693] Running
	I1229 07:51:31.737662   38571 system_pods.go:89] "kindnet-gwvxq" [07f4fe8b-bce0-4fd5-93cf-2de2803c6c14] Running
	I1229 07:51:31.737671   38571 system_pods.go:89] "kube-apiserver-multinode-178114" [e17c2dfb-5fb7-42ef-a235-edc0a3bb2c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:51:31.737681   38571 system_pods.go:89] "kube-controller-manager-multinode-178114" [a0c429d8-6d0a-4057-b3f4-ceef822a9ffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:51:31.737691   38571 system_pods.go:89] "kube-proxy-2b4vx" [ade09c16-533b-4a8b-a136-3692d76cbfc9] Running
	I1229 07:51:31.737697   38571 system_pods.go:89] "kube-proxy-cv887" [881437d9-95e9-49e4-84c0-821225e76269] Running
	I1229 07:51:31.737702   38571 system_pods.go:89] "kube-proxy-z489z" [0d8a34cb-9fbc-4cae-b846-2dd702009fc0] Running
	I1229 07:51:31.737711   38571 system_pods.go:89] "kube-scheduler-multinode-178114" [704630b8-21f0-432d-89eb-3b3308e951c7] Running
	I1229 07:51:31.737716   38571 system_pods.go:89] "storage-provisioner" [8bdbcf14-1a5b-4150-9adc-728e41f9e652] Running
	I1229 07:51:31.737727   38571 system_pods.go:126] duration metric: took 3.873327ms to wait for k8s-apps to be running ...
	I1229 07:51:31.737746   38571 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:51:31.737820   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:51:31.755088   38571 system_svc.go:56] duration metric: took 17.333941ms WaitForService to wait for kubelet
	I1229 07:51:31.755119   38571 kubeadm.go:587] duration metric: took 10.356757748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:51:31.755140   38571 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:51:31.758227   38571 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1229 07:51:31.758250   38571 node_conditions.go:123] node cpu capacity is 2
	I1229 07:51:31.758264   38571 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1229 07:51:31.758269   38571 node_conditions.go:123] node cpu capacity is 2
	I1229 07:51:31.758275   38571 node_conditions.go:105] duration metric: took 3.129916ms to run NodePressure ...
	I1229 07:51:31.758288   38571 start.go:242] waiting for startup goroutines ...
	I1229 07:51:31.758299   38571 start.go:247] waiting for cluster config update ...
	I1229 07:51:31.758311   38571 start.go:256] writing updated cluster config ...
	I1229 07:51:31.760550   38571 out.go:203] 
	I1229 07:51:31.762212   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:31.762323   38571 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/config.json ...
	I1229 07:51:31.764028   38571 out.go:179] * Starting "multinode-178114-m02" worker node in "multinode-178114" cluster
	I1229 07:51:31.765543   38571 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:51:31.765568   38571 cache.go:65] Caching tarball of preloaded images
	I1229 07:51:31.765696   38571 preload.go:251] Found /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1229 07:51:31.765714   38571 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:51:31.765854   38571 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/config.json ...
	I1229 07:51:31.766093   38571 start.go:360] acquireMachinesLock for multinode-178114-m02: {Name:mk15f2078da2c2dd9529f5e9a0dd3e4cc97196c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1229 07:51:31.766158   38571 start.go:364] duration metric: took 41.249µs to acquireMachinesLock for "multinode-178114-m02"
	I1229 07:51:31.766180   38571 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:51:31.766188   38571 fix.go:54] fixHost starting: m02
	I1229 07:51:31.767951   38571 fix.go:112] recreateIfNeeded on multinode-178114-m02: state=Stopped err=<nil>
	W1229 07:51:31.767987   38571 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:51:31.769865   38571 out.go:252] * Restarting existing kvm2 VM for "multinode-178114-m02" ...
	I1229 07:51:31.769919   38571 main.go:144] libmachine: starting domain...
	I1229 07:51:31.769930   38571 main.go:144] libmachine: ensuring networks are active...
	I1229 07:51:31.771017   38571 main.go:144] libmachine: Ensuring network default is active
	I1229 07:51:31.771435   38571 main.go:144] libmachine: Ensuring network mk-multinode-178114 is active
	I1229 07:51:31.771905   38571 main.go:144] libmachine: getting domain XML...
	I1229 07:51:31.773138   38571 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>multinode-178114-m02</name>
	  <uuid>235b672c-4d78-42ab-b738-854b9695842e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/multinode-178114-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a6:c7:ce'/>
	      <source network='mk-multinode-178114'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7c:63:d5'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
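	The XML above is the existing libvirt definition that libmachine restarts. Done by hand, starting an already-defined domain is a single virsh call; a sketch assuming virsh is installed and the caller has rights on qemu:///system:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Start the already-defined domain by hand; libmachine does the same thing
		// through libvirt. Assumes virsh is installed and qemu:///system is usable.
		name := "multinode-178114-m02"
		out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("virsh start failed:", err)
		}
	}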
	
	I1229 07:51:33.090100   38571 main.go:144] libmachine: waiting for domain to start...
	I1229 07:51:33.091989   38571 main.go:144] libmachine: domain is now running
	I1229 07:51:33.092014   38571 main.go:144] libmachine: waiting for IP...
	I1229 07:51:33.092777   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:33.093388   38571 main.go:144] libmachine: domain multinode-178114-m02 has current primary IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:33.093405   38571 main.go:144] libmachine: found domain IP: 192.168.39.61
	I1229 07:51:33.093413   38571 main.go:144] libmachine: reserving static IP address...
	I1229 07:51:33.093903   38571 main.go:144] libmachine: found host DHCP lease matching {name: "multinode-178114-m02", mac: "52:54:00:a6:c7:ce", ip: "192.168.39.61"} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:49:14 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:33.093935   38571 main.go:144] libmachine: skip adding static IP to network mk-multinode-178114 - found existing host DHCP lease matching {name: "multinode-178114-m02", mac: "52:54:00:a6:c7:ce", ip: "192.168.39.61"}
	I1229 07:51:33.093949   38571 main.go:144] libmachine: reserved static IP address 192.168.39.61 for domain multinode-178114-m02
	I1229 07:51:33.093959   38571 main.go:144] libmachine: waiting for SSH...
	I1229 07:51:33.093966   38571 main.go:144] libmachine: Getting to WaitForSSH function...
	I1229 07:51:33.096568   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:33.097104   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:49:14 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:33.097139   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:33.097392   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:33.097579   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:33.097588   38571 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1229 07:51:36.199067   38571 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.39.61:22: connect: no route to host
	I1229 07:51:42.279096   38571 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.39.61:22: connect: no route to host
	I1229 07:51:45.381551   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 
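	The two "no route to host" dials followed by a successful `exit 0` above are the SSH-readiness wait. All it needs is a TCP dial loop against port 22; a sketch with the guest address from the log used as an example value:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH dials TCP port 22 until the guest accepts connections; the
	// "no route to host" errors in the log are exactly what this loop absorbs
	// while the VM is still booting.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh on %s never became reachable", addr)
	}

	func main() {
		fmt.Println(waitForSSH("192.168.39.61:22", 2*time.Minute))
	}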
	I1229 07:51:45.385232   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.385704   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.385730   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.386098   38571 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/config.json ...
	I1229 07:51:45.386314   38571 machine.go:94] provisionDockerMachine start ...
	I1229 07:51:45.388606   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.389148   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.389172   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.389342   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:45.389521   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:45.389531   38571 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:51:45.490500   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1229 07:51:45.490531   38571 buildroot.go:166] provisioning hostname "multinode-178114-m02"
	I1229 07:51:45.493053   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.493445   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.493466   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.493619   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:45.493862   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:45.493877   38571 main.go:144] libmachine: About to run SSH command:
	sudo hostname multinode-178114-m02 && echo "multinode-178114-m02" | sudo tee /etc/hostname
	I1229 07:51:45.614273   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: multinode-178114-m02
	
	I1229 07:51:45.617481   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.618016   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.618062   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.618279   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:45.618509   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:45.618527   38571 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-178114-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-178114-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-178114-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:51:45.734666   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 
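	The hostname and /etc/hosts commands above run over the same SSH transport used for the rest of provisioning. A self-contained sketch of running one remote command with golang.org/x/crypto/ssh, where the user, key path and address are the ones shown in the log and are assumptions about this particular environment:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote executes one command on the guest over SSH, the transport the
	// provisioning steps above use.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.39.61:22", "docker",
			"/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa",
			"hostname")
		fmt.Print(out)
		if err != nil {
			fmt.Println(err)
		}
	}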
	I1229 07:51:45.734698   38571 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9552/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9552/.minikube}
	I1229 07:51:45.734721   38571 buildroot.go:174] setting up certificates
	I1229 07:51:45.734732   38571 provision.go:84] configureAuth start
	I1229 07:51:45.737442   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.737828   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.737851   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.740417   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.740777   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.740825   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.740969   38571 provision.go:143] copyHostCerts
	I1229 07:51:45.740996   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:51:45.741030   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem, removing ...
	I1229 07:51:45.741048   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem
	I1229 07:51:45.741134   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/cert.pem (1123 bytes)
	I1229 07:51:45.741231   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:51:45.741257   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem, removing ...
	I1229 07:51:45.741268   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem
	I1229 07:51:45.741309   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/key.pem (1675 bytes)
	I1229 07:51:45.741369   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:51:45.741406   38571 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem, removing ...
	I1229 07:51:45.741416   38571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem
	I1229 07:51:45.741452   38571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9552/.minikube/ca.pem (1082 bytes)
	I1229 07:51:45.741515   38571 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem org=jenkins.multinode-178114-m02 san=[127.0.0.1 192.168.39.61 localhost minikube multinode-178114-m02]
	I1229 07:51:45.939677   38571 provision.go:177] copyRemoteCerts
	I1229 07:51:45.939763   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:51:45.942600   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.943036   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:45.943071   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:45.943207   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	I1229 07:51:46.026486   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:51:46.026556   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1229 07:51:46.058604   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:51:46.058680   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:51:46.091269   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:51:46.091350   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:51:46.123837   38571 provision.go:87] duration metric: took 389.088504ms to configureAuth
	I1229 07:51:46.123867   38571 buildroot.go:189] setting minikube options for container-runtime
	I1229 07:51:46.124137   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:46.126902   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.127295   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:46.127318   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.127516   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:46.127761   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:46.127772   38571 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:51:46.229914   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1229 07:51:46.229949   38571 buildroot.go:70] root file system type: tmpfs
	I1229 07:51:46.230152   38571 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:51:46.233863   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.234532   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:46.234560   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.234729   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:46.234970   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:46.235024   38571 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.39.92"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:51:46.371964   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.39.92
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:51:46.375244   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.375719   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:46.375740   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:46.375936   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:46.376190   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:46.376219   38571 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:51:47.365467   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1229 07:51:47.365501   38571 machine.go:97] duration metric: took 1.979172812s to provisionDockerMachine
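	The `diff ... || { mv ... && systemctl ... }` command above is an idempotent unit update: the rendered docker.service only replaces the live one, followed by a daemon-reload and restart, when its contents actually changed. A local sketch of that pattern (path and unit content are placeholders, not the full unit from the log):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit mirrors the pattern in the log: write the candidate unit next to
	// the live one and only swap it in and restart the service when it differs.
	func updateUnit(path string, content []byte) error {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return nil // unchanged, leave the running service alone
		}
		if err := os.WriteFile(path+".new", content, 0644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated example
		if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Println(err)
		}
	}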
	I1229 07:51:47.365525   38571 start.go:293] postStartSetup for "multinode-178114-m02" (driver="kvm2")
	I1229 07:51:47.365544   38571 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:51:47.365614   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:51:47.369102   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.369609   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:47.369658   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.369858   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	I1229 07:51:47.454628   38571 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:51:47.460096   38571 info.go:137] Remote host: Buildroot 2025.02
	I1229 07:51:47.460129   38571 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/addons for local assets ...
	I1229 07:51:47.460214   38571 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9552/.minikube/files for local assets ...
	I1229 07:51:47.460300   38571 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> 134862.pem in /etc/ssl/certs
	I1229 07:51:47.460314   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /etc/ssl/certs/134862.pem
	I1229 07:51:47.460446   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:51:47.474150   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:51:47.504508   38571 start.go:296] duration metric: took 138.968113ms for postStartSetup
	I1229 07:51:47.504548   38571 fix.go:56] duration metric: took 15.73836213s for fixHost
	I1229 07:51:47.507872   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.508342   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:47.508376   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.508578   38571 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:47.508890   38571 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1229 07:51:47.508907   38571 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1229 07:51:47.612534   38571 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766994707.579723900
	
	I1229 07:51:47.612568   38571 fix.go:216] guest clock: 1766994707.579723900
	I1229 07:51:47.612580   38571 fix.go:229] Guest: 2025-12-29 07:51:47.5797239 +0000 UTC Remote: 2025-12-29 07:51:47.504552719 +0000 UTC m=+54.455724724 (delta=75.171181ms)
	I1229 07:51:47.612607   38571 fix.go:200] guest clock delta is within tolerance: 75.171181ms
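	The guest clock check above parses the output of `date +%s.%N` and compares it with the host clock. A small sketch of that delta computation; the 2s tolerance is an assumed example value, and float parsing is precise enough for a millisecond-level check:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and reports how far it
	// drifts from the local clock.
	func clockDelta(guestStamp string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestStamp, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return time.Since(guest), nil
	}

	func main() {
		// timestamp taken from the log above
		d, err := clockDelta("1766994707.579723900")
		if err != nil {
			fmt.Println(err)
			return
		}
		const tolerance = 2 * time.Second
		fmt.Printf("delta=%v within %v: %v\n", d, tolerance, d > -tolerance && d < tolerance)
	}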
	I1229 07:51:47.612613   38571 start.go:83] releasing machines lock for "multinode-178114-m02", held for 15.846442111s
	I1229 07:51:47.615681   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.616222   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:47.616256   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.618813   38571 out.go:179] * Found network options:
	I1229 07:51:47.620409   38571 out.go:179]   - NO_PROXY=192.168.39.92
	W1229 07:51:47.621860   38571 proxy.go:120] fail to check proxy env: Error ip not in block
	W1229 07:51:47.622320   38571 proxy.go:120] fail to check proxy env: Error ip not in block
	I1229 07:51:47.622422   38571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1229 07:51:47.622467   38571 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:51:47.625959   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.626049   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.626524   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:47.626553   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.626568   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:47.626601   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:47.626763   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	I1229 07:51:47.626768   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	W1229 07:51:47.726024   38571 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:51:47.726098   38571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:51:47.746914   38571 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1229 07:51:47.746951   38571 start.go:496] detecting cgroup driver to use...
	I1229 07:51:47.746985   38571 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:51:47.747112   38571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:51:47.769910   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:51:47.783288   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:51:47.796442   38571 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:51:47.796524   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:51:47.809923   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:51:47.822747   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:51:47.835679   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:51:47.849902   38571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:51:47.863549   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:51:47.877168   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:51:47.891472   38571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:51:47.904645   38571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:51:47.916418   38571 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1229 07:51:47.916488   38571 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1229 07:51:47.930238   38571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:51:47.942772   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:48.101330   38571 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 07:51:48.143872   38571 start.go:496] detecting cgroup driver to use...
	I1229 07:51:48.143913   38571 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1229 07:51:48.143982   38571 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:51:48.164641   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:51:48.186731   38571 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:51:48.212579   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:51:48.229812   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:51:48.247059   38571 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1229 07:51:48.277023   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:51:48.299961   38571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:51:48.333362   38571 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:51:48.338889   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:51:48.356729   38571 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:51:48.379319   38571 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:51:48.548034   38571 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:51:48.709643   38571 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:51:48.709689   38571 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 07:51:48.733488   38571 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:51:48.750952   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:48.912548   38571 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 07:51:49.511196   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:51:49.528962   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:51:49.545643   38571 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1229 07:51:49.564777   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:51:49.582256   38571 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:51:49.727380   38571 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:51:49.886923   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:50.040853   38571 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:51:50.086598   38571 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:51:50.104551   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:50.255421   38571 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:51:50.377138   38571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:51:50.406637   38571 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:51:50.406717   38571 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:51:50.416041   38571 start.go:574] Will wait 60s for crictl version
	I1229 07:51:50.416128   38571 ssh_runner.go:195] Run: which crictl
	I1229 07:51:50.420871   38571 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 07:51:50.463224   38571 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1229 07:51:50.463305   38571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:51:50.494374   38571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:51:50.524155   38571 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 28.5.2 ...
	I1229 07:51:50.525817   38571 out.go:179]   - env NO_PROXY=192.168.39.92
	I1229 07:51:50.529771   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:50.530188   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:50.530210   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:50.530394   38571 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1229 07:51:50.535249   38571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:51:50.551107   38571 mustload.go:66] Loading cluster: multinode-178114
	I1229 07:51:50.551401   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:50.552904   38571 host.go:66] Checking if "multinode-178114" exists ...
	I1229 07:51:50.553137   38571 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114 for IP: 192.168.39.61
	I1229 07:51:50.553149   38571 certs.go:195] generating shared ca certs ...
	I1229 07:51:50.553162   38571 certs.go:227] acquiring lock for ca certs: {Name:mke00d9bdd9ac6280bcf2843fe76ff41695d9199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:50.553279   38571 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key
	I1229 07:51:50.553329   38571 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key
	I1229 07:51:50.553354   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:51:50.553378   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:51:50.553400   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:51:50.553415   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:51:50.553474   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem (1338 bytes)
	W1229 07:51:50.553508   38571 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486_empty.pem, impossibly tiny 0 bytes
	I1229 07:51:50.553516   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:51:50.553541   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:51:50.553564   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:51:50.553588   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/key.pem (1675 bytes)
	I1229 07:51:50.553627   38571 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem (1708 bytes)
	I1229 07:51:50.553656   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem -> /usr/share/ca-certificates/13486.pem
	I1229 07:51:50.553669   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem -> /usr/share/ca-certificates/134862.pem
	I1229 07:51:50.553681   38571 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.553703   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:51:50.586711   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:51:50.620415   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:51:50.653838   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:51:50.688575   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/certs/13486.pem --> /usr/share/ca-certificates/13486.pem (1338 bytes)
	I1229 07:51:50.720377   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/ssl/certs/134862.pem --> /usr/share/ca-certificates/134862.pem (1708 bytes)
	I1229 07:51:50.751255   38571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:51:50.782045   38571 ssh_runner.go:195] Run: openssl version
	I1229 07:51:50.791331   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/134862.pem
	I1229 07:51:50.804789   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/134862.pem /etc/ssl/certs/134862.pem
	I1229 07:51:50.818071   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134862.pem
	I1229 07:51:50.824552   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/134862.pem
	I1229 07:51:50.824642   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134862.pem
	I1229 07:51:50.833293   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:51:50.847330   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/134862.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:51:50.860707   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.874005   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:51:50.888500   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.894902   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.894976   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.902998   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:51:50.915886   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:51:50.929184   38571 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13486.pem
	I1229 07:51:50.942117   38571 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13486.pem /etc/ssl/certs/13486.pem
	I1229 07:51:50.956133   38571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13486.pem
	I1229 07:51:50.961892   38571 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/13486.pem
	I1229 07:51:50.961968   38571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13486.pem
	I1229 07:51:50.969851   38571 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:51:50.982872   38571 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13486.pem /etc/ssl/certs/51391683.0
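
	[editor's note] The openssl/ln steps above follow the standard OpenSSL hashed-directory layout: each CA certificate is symlinked under /etc/ssl/certs as <subject-hash>.0. A minimal by-hand sketch of the same pattern, using the 134862.pem certificate from this run (the variable names are illustrative, not minikube code):

	    # link a CA cert under its OpenSSL subject hash so TLS libraries can locate it
	    CERT=/usr/share/ca-certificates/134862.pem       # PEM certificate already copied to the node
	    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. 3ec20f2e
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # ".0" = first certificate with this hash
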
	I1229 07:51:50.998064   38571 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:51:51.003603   38571 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:51:51.003656   38571 kubeadm.go:935] updating node {m02 192.168.39.61 8443 v1.35.0 docker false true} ...
	I1229 07:51:51.003837   38571 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-178114-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:51:51.003929   38571 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:51:51.019065   38571 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:51:51.019176   38571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1229 07:51:51.032048   38571 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1229 07:51:51.055537   38571 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:51:51.079105   38571 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1229 07:51:51.084260   38571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:51:51.100209   38571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:51.248104   38571 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:51:51.292715   38571 host.go:66] Checking if "multinode-178114" exists ...
	I1229 07:51:51.293054   38571 start.go:318] joinCluster: &{Name:multinode-178114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:multinode-178114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:51:51.293151   38571 start.go:331] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1229 07:51:51.293168   38571 host.go:66] Checking if "multinode-178114-m02" exists ...
	I1229 07:51:51.293340   38571 mustload.go:66] Loading cluster: multinode-178114
	I1229 07:51:51.293485   38571 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:51:51.295575   38571 host.go:66] Checking if "multinode-178114" exists ...
	I1229 07:51:51.295931   38571 api_server.go:166] Checking apiserver status ...
	I1229 07:51:51.295996   38571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:51:51.299073   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:51.299607   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:51.299652   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:51.299856   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:51.400779   38571 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2188/cgroup
	I1229 07:51:51.413982   38571 ssh_runner.go:195] Run: sudo grep ^0:: /proc/2188/cgroup
	I1229 07:51:51.426506   38571 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38e8599b7482ef471a759bacef66e1e4.slice/docker-6ac14357a37d6809d0e08e3dae47a1557c5265469c560bda9efb32d6b0ddc97d.scope/cgroup.freeze
	I1229 07:51:51.440271   38571 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:51:51.445141   38571 api_server.go:325] https://192.168.39.92:8443/healthz returned 200:
	ok
	I1229 07:51:51.445214   38571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl drain multinode-178114-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I1229 07:51:54.593019   38571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl drain multinode-178114-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.147758209s)
	I1229 07:51:54.593050   38571 node.go:129] successfully drained node "multinode-178114-m02"
	I1229 07:51:54.593120   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I1229 07:51:54.596278   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:54.596851   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:43 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:51:54.596883   38571 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:51:54.597048   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	I1229 07:51:54.926513   38571 node.go:156] successfully reset node "multinode-178114-m02"
	I1229 07:51:54.927076   38571 kapi.go:59] client config for multinode-178114: &rest.Config{Host:"https://192.168.39.92:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/profiles/multinode-178114/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:51:54.934420   38571 node.go:181] successfully deleted node "multinode-178114-m02"
	I1229 07:51:54.934444   38571 start.go:335] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1229 07:51:54.934517   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1229 07:51:54.937993   38571 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:54.938552   38571 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:51:04 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:51:54.938591   38571 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:51:54.938808   38571 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:51:55.089264   38571 start.go:344] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1229 07:51:55.089362   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nsk4hi.3w75j2rtid1l0ik3 --discovery-token-ca-cert-hash sha256:47f6a2f9bf9c65c35fcfecaaac32e7befdc059ab272131834f3128167a677a66 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-178114-m02"
	I1229 07:51:56.060370   38571 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1229 07:51:56.406702   38571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-178114-m02 minikube.k8s.io/updated_at=2025_12_29T07_51_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=multinode-178114 minikube.k8s.io/primary=false
	I1229 07:51:56.488264   38571 start.go:320] duration metric: took 5.195213102s to joinCluster
	I1229 07:51:56.490583   38571 out.go:203] 
	W1229 07:51:56.492051   38571 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: join node to cluster: error applying worker node "m02" label: apply node labels: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-178114-m02 minikube.k8s.io/updated_at=2025_12_29T07_51_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=multinode-178114 minikube.k8s.io/primary=false: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "multinode-178114-m02" not found
	
	W1229 07:51:56.492069   38571 out.go:285] * 
	W1229 07:51:56.492328   38571 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:51:56.493643   38571 out.go:203] 
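
	[editor's note] The failing label command above ran at 07:51:56.406, right after kubeadm join returned, and got NotFound, while the describe-nodes output further down shows multinode-178114-m02 with a CreationTimestamp of 07:51:56, so the label step appears to have raced the kubelet's node registration. A minimal by-hand guard (the polling loop and 30s timeout are illustrative assumptions, not minikube's own retry logic):

	    # wait for the freshly joined node object to exist before labeling it
	    KUBECTL="sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	    for i in $(seq 1 30); do
	      $KUBECTL get node multinode-178114-m02 >/dev/null 2>&1 && break
	      sleep 1
	    done
	    $KUBECTL label --overwrite nodes multinode-178114-m02 minikube.k8s.io/primary=false
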
	
	
	==> Docker <==
	Dec 29 07:51:11 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:11Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:51:11 multinode-178114 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Dec 29 07:51:14 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:14Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-gqqbx_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a4a4bb711fae1c6ea130df540fa5dd9aeac6d7756d6cd0c2b924734b363973e8\""
	Dec 29 07:51:14 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:14Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-gqqbx_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"e1490ac6bef88c98762e8239d7077f744731b871a97bf92a3fea1d9fd2cb8b43\""
	Dec 29 07:51:14 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:14Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-769dd8b7dd-4dk2b_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b1c902506953f2e8d6291e1da259b5c14f65a29fef95b14a48ae4f82ad68c6db\""
	Dec 29 07:51:14 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:14Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-769dd8b7dd-4dk2b_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"cd995693d7cb0b8517c4c10f35af4012ff122690f576e118488dcdfff2f20120\""
	Dec 29 07:51:14 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:14Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"ffef2760630057ce553216bee61a31205ea0007450eda71d489faf90f854dfa3\". Proceed without further sandbox information."
	Dec 29 07:51:14 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:14Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"e8220b5c7dc1cb6ba62099ada9e91a28fdc7f6ad3d788042250e5f4cc0f68f4a\". Proceed without further sandbox information."
	Dec 29 07:51:14 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:14Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"f207bbed05210bc5111a206efa9c2e7afc4cc1f0a7447ca4c7f7b4717f2d6ca6\". Proceed without further sandbox information."
	Dec 29 07:51:14 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:14Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"53e347908f7ef7aa794ec9d1fa90bed97ab8655f432cb3d99a3a6d70441ca505\". Proceed without further sandbox information."
	Dec 29 07:51:15 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:15Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-gqqbx_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a4a4bb711fae1c6ea130df540fa5dd9aeac6d7756d6cd0c2b924734b363973e8\""
	Dec 29 07:51:15 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:15Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-gqqbx_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"e1490ac6bef88c98762e8239d7077f744731b871a97bf92a3fea1d9fd2cb8b43\""
	Dec 29 07:51:15 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:15Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-769dd8b7dd-4dk2b_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b1c902506953f2e8d6291e1da259b5c14f65a29fef95b14a48ae4f82ad68c6db\""
	Dec 29 07:51:15 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:15Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-769dd8b7dd-4dk2b_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"cd995693d7cb0b8517c4c10f35af4012ff122690f576e118488dcdfff2f20120\""
	Dec 29 07:51:15 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4fa673ee7c26ef1d8975f0aa04f1d0ef9313c29fc4fb45c1d969569d8e7d4605/resolv.conf as [nameserver 192.168.122.1]"
	Dec 29 07:51:15 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a5b3e26087ee151e4a0c65a1a0ea5aff2e984af2369b91b810c3c3f650858720/resolv.conf as [nameserver 192.168.122.1]"
	Dec 29 07:51:15 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a0c33a4bcd802c5e33a781b47f77ba671027a176fa92357bce787f908f1826a/resolv.conf as [nameserver 192.168.122.1]"
	Dec 29 07:51:15 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/961394d4fb6a4903651ad2594369c9244e0244d47a61b5fc0a4fd4214b1be44d/resolv.conf as [nameserver 192.168.122.1]"
	Dec 29 07:51:18 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:18Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 29 07:51:19 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9f9b6df823c79f09dde5c7ddc7817ce1ce36d48e0379d59ffa4990d364cf8e3a/resolv.conf as [nameserver 192.168.122.1]"
	Dec 29 07:51:19 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9cc90b36543950cb3c7fb3b19582380c011927492dbaeb192e3442d09e67b2e2/resolv.conf as [nameserver 192.168.122.1]"
	Dec 29 07:51:19 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7e7767d321f0b12f0074637c1c153e85b3fc48f490d33b00f8bf5a723a0878a4/resolv.conf as [nameserver 192.168.122.1]"
	Dec 29 07:51:35 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dcd534f4697c36c8cc502df917f15b76395c83445909d030bc1cdae0ecaa96d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 29 07:51:35 multinode-178114 cri-dockerd[1551]: time="2025-12-29T07:51:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/744dcaa8b6559d27d99adb2726a6d6fae31e363885ac8210aa1b8f2ab3b26f0c/resolv.conf as [nameserver 192.168.122.1]"
	Dec 29 07:51:50 multinode-178114 dockerd[1156]: time="2025-12-29T07:51:50.414474019Z" level=info msg="ignoring event" container=1dbf6f0ac3081188a1c7f433d3c52bd8a32bd2fd57c4699ddd2a999e9d1eabdb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	e1c5b6eca24cc       aa5e3ebc0dfed       22 seconds ago      Running             coredns                   2                   744dcaa8b6559       coredns-7d764666f9-gqqbx                   kube-system
	22ffa8b755f31       8c811b4aec35f       22 seconds ago      Running             busybox                   2                   4dcd534f4697c       busybox-769dd8b7dd-4dk2b                   default
	d1d904ac514a4       4921d7a6dffa9       38 seconds ago      Running             kindnet-cni               2                   7e7767d321f0b       kindnet-gwvxq                              kube-system
	1dbf6f0ac3081       6e38f40d628db       38 seconds ago      Exited              storage-provisioner       3                   9cc90b3654395       storage-provisioner                        kube-system
	1a991010cd938       32652ff1bbe6b       38 seconds ago      Running             kube-proxy                2                   9f9b6df823c79       kube-proxy-2b4vx                           kube-system
	00e542ea8b4c7       0a108f7189562       42 seconds ago      Running             etcd                      2                   961394d4fb6a4       etcd-multinode-178114                      kube-system
	e228af10eaa8e       550794e3b12ac       42 seconds ago      Running             kube-scheduler            2                   3a0c33a4bcd80       kube-scheduler-multinode-178114            kube-system
	df32ecd001ca4       2c9a4b058bd7e       42 seconds ago      Running             kube-controller-manager   2                   a5b3e26087ee1       kube-controller-manager-multinode-178114   kube-system
	6ac14357a37d6       5c6acd67e9cd1       42 seconds ago      Running             kube-apiserver            2                   4fa673ee7c26e       kube-apiserver-multinode-178114            kube-system
	10cc8e4caf70f       aa5e3ebc0dfed       2 minutes ago       Exited              coredns                   1                   a4a4bb711fae1       coredns-7d764666f9-gqqbx                   kube-system
	caa87a6b4e9f5       8c811b4aec35f       2 minutes ago       Exited              busybox                   1                   b1c902506953f       busybox-769dd8b7dd-4dk2b                   default
	3892c6c0af785       4921d7a6dffa9       3 minutes ago       Exited              kindnet-cni               1                   88eb1127f7bc7       kindnet-gwvxq                              kube-system
	d644dd7bd97ec       32652ff1bbe6b       3 minutes ago       Exited              kube-proxy                1                   73a8ff257e159       kube-proxy-2b4vx                           kube-system
	4488bd802d843       0a108f7189562       3 minutes ago       Exited              etcd                      1                   763a73e3ba26c       etcd-multinode-178114                      kube-system
	639f398a163a3       550794e3b12ac       3 minutes ago       Exited              kube-scheduler            1                   1092c045c2b83       kube-scheduler-multinode-178114            kube-system
	d795c528b8a83       2c9a4b058bd7e       3 minutes ago       Exited              kube-controller-manager   1                   537ac7bc94fd9       kube-controller-manager-multinode-178114   kube-system
	2dc0bafab6b56       5c6acd67e9cd1       3 minutes ago       Exited              kube-apiserver            1                   a093949edbf9b       kube-apiserver-multinode-178114            kube-system
	
	
	==> coredns [10cc8e4caf70] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50127 - 31358 "HINFO IN 5290636270364652393.3258819713363976087. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060912989s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e1c5b6eca24c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49636 - 47404 "HINFO IN 2375192899720851288.2420464807317320366. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.089544201s
	
	
	==> describe nodes <==
	Name:               multinode-178114
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-178114
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=multinode-178114
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_44_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:44:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-178114
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:51:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:51:31 +0000   Mon, 29 Dec 2025 07:44:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:51:31 +0000   Mon, 29 Dec 2025 07:44:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:51:31 +0000   Mon, 29 Dec 2025 07:44:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:51:31 +0000   Mon, 29 Dec 2025 07:51:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    multinode-178114
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae972118d57b4c37b972fae087082f1e
	  System UUID:                ae972118-d57b-4c37-b972-fae087082f1e
	  Boot ID:                    bdbd1132-8460-4bb0-8e6b-728e7841b3a0
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-4dk2b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 coredns-7d764666f9-gqqbx                    100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     6m56s
	  kube-system                 etcd-multinode-178114                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         7m2s
	  kube-system                 kindnet-gwvxq                               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      6m56s
	  kube-system                 kube-apiserver-multinode-178114             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                 kube-controller-manager-multinode-178114    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                 kube-proxy-2b4vx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-scheduler-multinode-178114             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (7%)  220Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  6m57s  node-controller  Node multinode-178114 event: Registered Node multinode-178114 in Controller
	  Normal  RegisteredNode  3m5s   node-controller  Node multinode-178114 event: Registered Node multinode-178114 in Controller
	  Normal  RegisteredNode  36s    node-controller  Node multinode-178114 event: Registered Node multinode-178114 in Controller
	
	
	Name:               multinode-178114-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-178114-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:51:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-178114-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:51:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:51:56 +0000   Mon, 29 Dec 2025 07:51:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:51:56 +0000   Mon, 29 Dec 2025 07:51:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:51:56 +0000   Mon, 29 Dec 2025 07:51:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 29 Dec 2025 07:51:56 +0000   Mon, 29 Dec 2025 07:51:56 +0000   KubeletNotReady              [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, CSINode is not yet initialized]
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    multinode-178114-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 235b672c4d7842abb738854b9695842e
	  System UUID:                235b672c-4d78-42ab-b738-854b9695842e
	  Boot ID:                    1338223c-c5e2-43aa-b26a-fdae8f413b2a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-jq8st    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kindnet-5tphv               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      6m12s
	  kube-system                 kube-proxy-cv887            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:              <none>
	
	
	==> dmesg <==
	[Dec29 07:50] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Dec29 07:51] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001823] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.162053] crun[387]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.750988] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.110232] kauditd_printk_skb: 161 callbacks suppressed
	[  +1.826780] kauditd_printk_skb: 372 callbacks suppressed
	[  +5.440602] kauditd_printk_skb: 284 callbacks suppressed
	[ +13.439890] kauditd_printk_skb: 80 callbacks suppressed
	[ +14.727823] kauditd_printk_skb: 119 callbacks suppressed
	
	
	==> etcd [00e542ea8b4c] <==
	{"level":"info","ts":"2025-12-29T07:51:16.654482Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.92:2380"}
	{"level":"info","ts":"2025-12-29T07:51:16.651765Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:51:16.655468Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"d468df581a6d993d switched to configuration voters=(15305728903112137021)"}
	{"level":"info","ts":"2025-12-29T07:51:16.656111Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"f0381c3cc77c8c9d","local-member-id":"d468df581a6d993d","added-peer-id":"d468df581a6d993d","added-peer-peer-urls":["https://192.168.39.92:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:51:16.656412Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"f0381c3cc77c8c9d","local-member-id":"d468df581a6d993d","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:51:16.655903Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"d468df581a6d993d","initial-advertise-peer-urls":["https://192.168.39.92:2380"],"listen-peer-urls":["https://192.168.39.92:2380"],"advertise-client-urls":["https://192.168.39.92:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.92:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:51:16.655919Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:51:17.007639Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"d468df581a6d993d is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-29T07:51:17.007689Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"d468df581a6d993d became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:51:17.007748Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"d468df581a6d993d received MsgPreVoteResp from d468df581a6d993d at term 3"}
	{"level":"info","ts":"2025-12-29T07:51:17.007761Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"d468df581a6d993d has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:51:17.007783Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"d468df581a6d993d became candidate at term 4"}
	{"level":"info","ts":"2025-12-29T07:51:17.013053Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"d468df581a6d993d received MsgVoteResp from d468df581a6d993d at term 4"}
	{"level":"info","ts":"2025-12-29T07:51:17.013553Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"d468df581a6d993d has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:51:17.013660Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"d468df581a6d993d became leader at term 4"}
	{"level":"info","ts":"2025-12-29T07:51:17.013672Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: d468df581a6d993d elected leader d468df581a6d993d at term 4"}
	{"level":"info","ts":"2025-12-29T07:51:17.015720Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"d468df581a6d993d","local-member-attributes":"{Name:multinode-178114 ClientURLs:[https://192.168.39.92:2379]}","cluster-id":"f0381c3cc77c8c9d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:51:17.015961Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:51:17.017037Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:51:17.017082Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:51:17.017163Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:51:17.019683Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:51:17.021964Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:51:17.024533Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.92:2379"}
	{"level":"info","ts":"2025-12-29T07:51:17.025386Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [4488bd802d84] <==
	{"level":"info","ts":"2025-12-29T07:48:48.173305Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:48:48.173675Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:48:48.173821Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:48:48.176537Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:48:48.178277Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:48:48.179434Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:48:48.179518Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.92:2379"}
	{"level":"info","ts":"2025-12-29T07:50:39.476252Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-29T07:50:39.477110Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"multinode-178114","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.92:2380"],"advertise-client-urls":["https://192.168.39.92:2379"]}
	{"level":"error","ts":"2025-12-29T07:50:39.477481Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-29T07:50:46.489545Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-29T07:50:46.489661Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-29T07:50:46.489693Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d468df581a6d993d","current-leader-member-id":"d468df581a6d993d"}
	{"level":"info","ts":"2025-12-29T07:50:46.489944Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-29T07:50:46.489967Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-29T07:50:46.490767Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-29T07:50:46.490850Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-29T07:50:46.490884Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-29T07:50:46.490933Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.92:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-29T07:50:46.490944Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.92:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-29T07:50:46.490948Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.92:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-29T07:50:46.494611Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.92:2380"}
	{"level":"error","ts":"2025-12-29T07:50:46.494795Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.92:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-29T07:50:46.494866Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.92:2380"}
	{"level":"info","ts":"2025-12-29T07:50:46.494889Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"multinode-178114","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.92:2380"],"advertise-client-urls":["https://192.168.39.92:2379"]}
	
	
	==> kernel <==
	 07:51:57 up 1 min,  0 users,  load average: 1.35, 0.43, 0.15
	Linux multinode-178114 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [3892c6c0af78] <==
	I1229 07:49:52.897716       1 main.go:297] Handling node with IPs: map[192.168.39.61:{}]
	I1229 07:49:52.897763       1 main.go:324] Node multinode-178114-m02 has CIDR [10.244.1.0/24] 
	I1229 07:50:02.896792       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1229 07:50:02.896890       1 main.go:324] Node multinode-178114-m03 has CIDR [10.244.3.0/24] 
	I1229 07:50:02.897479       1 main.go:297] Handling node with IPs: map[192.168.39.92:{}]
	I1229 07:50:02.897506       1 main.go:301] handling current node
	I1229 07:50:02.897530       1 main.go:297] Handling node with IPs: map[192.168.39.61:{}]
	I1229 07:50:02.897549       1 main.go:324] Node multinode-178114-m02 has CIDR [10.244.1.0/24] 
	I1229 07:50:12.896143       1 main.go:297] Handling node with IPs: map[192.168.39.92:{}]
	I1229 07:50:12.896274       1 main.go:301] handling current node
	I1229 07:50:12.896304       1 main.go:297] Handling node with IPs: map[192.168.39.61:{}]
	I1229 07:50:12.896313       1 main.go:324] Node multinode-178114-m02 has CIDR [10.244.1.0/24] 
	I1229 07:50:12.896734       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1229 07:50:12.896911       1 main.go:324] Node multinode-178114-m03 has CIDR [10.244.2.0/24] 
	I1229 07:50:12.897240       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.176 Flags: [] Table: 0 Realm: 0} 
	I1229 07:50:22.896706       1 main.go:297] Handling node with IPs: map[192.168.39.92:{}]
	I1229 07:50:22.896762       1 main.go:301] handling current node
	I1229 07:50:22.896785       1 main.go:297] Handling node with IPs: map[192.168.39.61:{}]
	I1229 07:50:22.896792       1 main.go:324] Node multinode-178114-m02 has CIDR [10.244.1.0/24] 
	I1229 07:50:22.897984       1 main.go:297] Handling node with IPs: map[192.168.39.176:{}]
	I1229 07:50:22.898011       1 main.go:324] Node multinode-178114-m03 has CIDR [10.244.2.0/24] 
	I1229 07:50:32.896996       1 main.go:297] Handling node with IPs: map[192.168.39.92:{}]
	I1229 07:50:32.897240       1 main.go:301] handling current node
	I1229 07:50:32.897262       1 main.go:297] Handling node with IPs: map[192.168.39.61:{}]
	I1229 07:50:32.897270       1 main.go:324] Node multinode-178114-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [d1d904ac514a] <==
	podIP = 192.168.39.92
	I1229 07:51:21.031057       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:51:21.031094       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:51:21.031108       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:51:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:51:21.507633       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:51:21.507697       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:51:21.507708       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:51:21.509804       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:51:21.909894       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:51:21.909934       1 metrics.go:72] Registering metrics
	I1229 07:51:21.910454       1 controller.go:711] "Syncing nftables rules"
	I1229 07:51:31.494494       1 main.go:297] Handling node with IPs: map[192.168.39.92:{}]
	I1229 07:51:31.494609       1 main.go:301] handling current node
	I1229 07:51:31.496394       1 main.go:297] Handling node with IPs: map[192.168.39.61:{}]
	I1229 07:51:31.496430       1 main.go:324] Node multinode-178114-m02 has CIDR [10.244.1.0/24] 
	I1229 07:51:31.497000       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.61 Flags: [] Table: 0 Realm: 0} 
	I1229 07:51:41.494469       1 main.go:297] Handling node with IPs: map[192.168.39.92:{}]
	I1229 07:51:41.494516       1 main.go:301] handling current node
	I1229 07:51:41.494533       1 main.go:297] Handling node with IPs: map[192.168.39.61:{}]
	I1229 07:51:41.494539       1 main.go:324] Node multinode-178114-m02 has CIDR [10.244.1.0/24] 
	I1229 07:51:51.495057       1 main.go:297] Handling node with IPs: map[192.168.39.92:{}]
	I1229 07:51:51.495126       1 main.go:301] handling current node
	I1229 07:51:51.495153       1 main.go:297] Handling node with IPs: map[192.168.39.61:{}]
	I1229 07:51:51.495160       1 main.go:324] Node multinode-178114-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2dc0bafab6b5] <==
	W1229 07:50:49.022173       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.033273       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.036004       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.046829       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.106584       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.117370       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.245014       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.277037       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.290301       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.308102       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.350534       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.366594       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.370494       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.381585       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.416529       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:50:49.438406       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E1229 07:50:49.498826       1 controller.go:138] "Unable to delete lease" err="Delete \"https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-ozwzdluop76vquib7lb4t74y3m\": context deadline exceeded" lease="kube-system/apiserver-ozwzdluop76vquib7lb4t74y3m"
	I1229 07:50:49.499030       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	{"level":"warn","ts":"2025-12-29T07:50:49.499324Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00177c1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1229 07:50:49.499549       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1229 07:50:49.499605       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1229 07:50:49.499752       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 110.472µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1229 07:50:49.500867       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1229 07:50:49.501121       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.792459ms" method="DELETE" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-ozwzdluop76vquib7lb4t74y3m" result=null
	W1229 07:50:49.517099       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6ac14357a37d] <==
	I1229 07:51:18.567475       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:51:18.570232       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:18.571064       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1229 07:51:18.571407       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:18.571475       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:51:18.571504       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:51:18.571513       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:51:18.571519       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:51:18.571814       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:51:18.571939       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:51:18.584791       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:51:18.596032       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:18.596120       1 policy_source.go:248] refreshing policies
	I1229 07:51:18.614481       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:51:18.615086       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:51:18.638689       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:51:18.967504       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:51:19.378195       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:51:20.914929       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:51:21.120604       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:51:21.262872       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:51:21.305207       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:51:21.923829       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:51:22.177052       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:51:22.222752       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d795c528b8a8] <==
	I1229 07:48:52.831664       1 shared_informer.go:377] "Caches are synced"
	I1229 07:48:52.833465       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1229 07:48:52.834742       1 shared_informer.go:377] "Caches are synced"
	I1229 07:48:52.845084       1 shared_informer.go:377] "Caches are synced"
	I1229 07:48:52.845307       1 shared_informer.go:377] "Caches are synced"
	I1229 07:48:52.845598       1 shared_informer.go:377] "Caches are synced"
	I1229 07:48:52.845629       1 shared_informer.go:377] "Caches are synced"
	I1229 07:48:52.882411       1 shared_informer.go:377] "Caches are synced"
	I1229 07:48:52.882441       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:48:52.882447       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:48:52.926062       1 shared_informer.go:377] "Caches are synced"
	I1229 07:48:53.114110       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1229 07:49:03.091420       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-178114-m02"
	I1229 07:49:26.316677       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-178114-m03"
	I1229 07:49:27.535830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-178114-m03"
	I1229 07:49:27.540276       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-178114-m02\" does not exist"
	I1229 07:49:27.557172       1 range_allocator.go:433] "Set node PodCIDR" node="multinode-178114-m02" podCIDRs=["10.244.1.0/24"]
	I1229 07:49:41.912607       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-178114-m02"
	I1229 07:49:42.888527       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-178114-m02"
	I1229 07:50:05.151400       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-178114-m02"
	I1229 07:50:06.258031       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-178114-m03\" does not exist"
	I1229 07:50:06.258418       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-178114-m02"
	I1229 07:50:06.271724       1 range_allocator.go:433] "Set node PodCIDR" node="multinode-178114-m03" podCIDRs=["10.244.2.0/24"]
	I1229 07:50:20.737687       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-178114-m02"
	I1229 07:50:24.494150       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-178114-m02"
	
	
	==> kube-controller-manager [df32ecd001ca] <==
	I1229 07:51:21.721987       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.733972       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:51:21.792516       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.800709       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.800729       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.800714       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.803525       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.803625       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.803961       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.805245       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.808494       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.808626       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.808893       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:51:21.809412       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.809840       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-178114"
	I1229 07:51:21.810067       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-178114-m02"
	I1229 07:51:21.810342       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.814036       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1229 07:51:21.817907       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:21.817935       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:51:21.817941       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:51:21.834868       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:31.543684       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-178114-m02"
	I1229 07:51:56.515776       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-178114-m02\" does not exist"
	I1229 07:51:56.545052       1 range_allocator.go:433] "Set node PodCIDR" node="multinode-178114-m02" podCIDRs=["10.244.1.0/24"]
	
	
	==> kube-proxy [1a991010cd93] <==
	I1229 07:51:20.786703       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:51:20.887401       1 shared_informer.go:377] "Caches are synced"
	I1229 07:51:20.887443       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.92"]
	E1229 07:51:20.887539       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:51:20.960170       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1229 07:51:20.960337       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 07:51:20.960550       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:51:20.971030       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:51:20.972816       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:51:20.974700       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:51:20.979868       1 config.go:200] "Starting service config controller"
	I1229 07:51:20.988261       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:51:20.980079       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:51:20.988362       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:51:20.980097       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:51:20.988373       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:51:20.984030       1 config.go:309] "Starting node config controller"
	I1229 07:51:20.988384       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:51:20.988389       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:51:21.088839       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:51:21.088868       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:51:21.088929       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d644dd7bd97e] <==
	I1229 07:48:51.800173       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:48:51.903563       1 shared_informer.go:377] "Caches are synced"
	I1229 07:48:51.903605       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.92"]
	E1229 07:48:51.903700       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:48:52.026885       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1229 07:48:52.026966       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1229 07:48:52.026990       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:48:52.053489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:48:52.054310       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:48:52.054370       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:48:52.060885       1 config.go:200] "Starting service config controller"
	I1229 07:48:52.060902       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:48:52.060922       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:48:52.060926       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:48:52.060939       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:48:52.060943       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:48:52.066362       1 config.go:309] "Starting node config controller"
	I1229 07:48:52.066445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:48:52.067080       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:48:52.161366       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:48:52.161403       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:48:52.161461       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [639f398a163a] <==
	I1229 07:48:47.531963       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:48:49.544467       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:48:49.544690       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:48:49.544781       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:48:49.544834       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:48:49.614938       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:48:49.615031       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:48:49.634352       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:48:49.634407       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:48:49.636711       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:48:49.637014       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:48:49.735345       1 shared_informer.go:377] "Caches are synced"
	I1229 07:50:39.420625       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1229 07:50:39.420790       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1229 07:50:39.421003       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1229 07:50:39.421344       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:50:39.421514       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1229 07:50:39.421610       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e228af10eaa8] <==
	I1229 07:51:16.917470       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:51:18.452143       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:51:18.452943       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:51:18.453335       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:51:18.453540       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:51:18.553742       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:51:18.553789       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:51:18.556161       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:51:18.557098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:51:18.557212       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:51:18.557406       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:51:18.658210       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:51:26 multinode-178114 kubelet[1924]: E1229 07:51:26.363981    1924 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-multinode-178114" containerName="kube-scheduler"
	Dec 29 07:51:26 multinode-178114 kubelet[1924]: E1229 07:51:26.522069    1924 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 29 07:51:26 multinode-178114 kubelet[1924]: E1229 07:51:26.522174    1924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dd603e72-7da4-4f75-8c97-de4593e77af5-config-volume podName:dd603e72-7da4-4f75-8c97-de4593e77af5 nodeName:}" failed. No retries permitted until 2025-12-29 07:51:34.522159403 +0000 UTC m=+20.721439726 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dd603e72-7da4-4f75-8c97-de4593e77af5-config-volume") pod "coredns-7d764666f9-gqqbx" (UID: "dd603e72-7da4-4f75-8c97-de4593e77af5") : object "kube-system"/"coredns" not registered
	Dec 29 07:51:26 multinode-178114 kubelet[1924]: E1229 07:51:26.623057    1924 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 29 07:51:26 multinode-178114 kubelet[1924]: E1229 07:51:26.623115    1924 projected.go:196] Error preparing data for projected volume kube-api-access-4s4kq for pod default/busybox-769dd8b7dd-4dk2b: object "default"/"kube-root-ca.crt" not registered
	Dec 29 07:51:26 multinode-178114 kubelet[1924]: E1229 07:51:26.623179    1924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08b149ea-1087-4c64-8763-0415836439b8-kube-api-access-4s4kq podName:08b149ea-1087-4c64-8763-0415836439b8 nodeName:}" failed. No retries permitted until 2025-12-29 07:51:34.623163798 +0000 UTC m=+20.822444110 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4s4kq" (UniqueName: "kubernetes.io/projected/08b149ea-1087-4c64-8763-0415836439b8-kube-api-access-4s4kq") pod "busybox-769dd8b7dd-4dk2b" (UID: "08b149ea-1087-4c64-8763-0415836439b8") : object "default"/"kube-root-ca.crt" not registered
	Dec 29 07:51:26 multinode-178114 kubelet[1924]: E1229 07:51:26.989591    1924 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-769dd8b7dd-4dk2b" podUID="08b149ea-1087-4c64-8763-0415836439b8"
	Dec 29 07:51:26 multinode-178114 kubelet[1924]: E1229 07:51:26.990200    1924 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7d764666f9-gqqbx" podUID="dd603e72-7da4-4f75-8c97-de4593e77af5"
	Dec 29 07:51:28 multinode-178114 kubelet[1924]: E1229 07:51:28.989702    1924 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-769dd8b7dd-4dk2b" podUID="08b149ea-1087-4c64-8763-0415836439b8"
	Dec 29 07:51:28 multinode-178114 kubelet[1924]: E1229 07:51:28.990241    1924 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7d764666f9-gqqbx" podUID="dd603e72-7da4-4f75-8c97-de4593e77af5"
	Dec 29 07:51:30 multinode-178114 kubelet[1924]: E1229 07:51:30.107476    1924 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-multinode-178114" containerName="etcd"
	Dec 29 07:51:30 multinode-178114 kubelet[1924]: E1229 07:51:30.401362    1924 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-multinode-178114" containerName="etcd"
	Dec 29 07:51:30 multinode-178114 kubelet[1924]: E1229 07:51:30.989443    1924 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-769dd8b7dd-4dk2b" podUID="08b149ea-1087-4c64-8763-0415836439b8"
	Dec 29 07:51:30 multinode-178114 kubelet[1924]: E1229 07:51:30.989526    1924 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7d764666f9-gqqbx" podUID="dd603e72-7da4-4f75-8c97-de4593e77af5"
	Dec 29 07:51:31 multinode-178114 kubelet[1924]: I1229 07:51:31.531479    1924 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 29 07:51:32 multinode-178114 kubelet[1924]: E1229 07:51:32.265564    1924 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-multinode-178114" containerName="kube-controller-manager"
	Dec 29 07:51:32 multinode-178114 kubelet[1924]: E1229 07:51:32.847793    1924 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-multinode-178114" containerName="kube-apiserver"
	Dec 29 07:51:33 multinode-178114 kubelet[1924]: E1229 07:51:33.429350    1924 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-multinode-178114" containerName="kube-apiserver"
	Dec 29 07:51:36 multinode-178114 kubelet[1924]: E1229 07:51:36.594149    1924 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gqqbx" containerName="coredns"
	Dec 29 07:51:37 multinode-178114 kubelet[1924]: E1229 07:51:37.642062    1924 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gqqbx" containerName="coredns"
	Dec 29 07:51:38 multinode-178114 kubelet[1924]: E1229 07:51:38.653814    1924 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gqqbx" containerName="coredns"
	Dec 29 07:51:39 multinode-178114 kubelet[1924]: E1229 07:51:39.662425    1924 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gqqbx" containerName="coredns"
	Dec 29 07:51:50 multinode-178114 kubelet[1924]: I1229 07:51:50.801214    1924 scope.go:122] "RemoveContainer" containerID="fa2d4896312f08a0158bb82960f215313321c7830d8387552b8f37458e33716e"
	Dec 29 07:51:50 multinode-178114 kubelet[1924]: I1229 07:51:50.802448    1924 scope.go:122] "RemoveContainer" containerID="1dbf6f0ac3081188a1c7f433d3c52bd8a32bd2fd57c4699ddd2a999e9d1eabdb"
	Dec 29 07:51:50 multinode-178114 kubelet[1924]: E1229 07:51:50.804331    1924 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8bdbcf14-1a5b-4150-9adc-728e41f9e652)\"" pod="kube-system/storage-provisioner" podUID="8bdbcf14-1a5b-4150-9adc-728e41f9e652"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-178114 -n multinode-178114
helpers_test.go:270: (dbg) Run:  kubectl --context multinode-178114 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-769dd8b7dd-gvwth
helpers_test.go:283: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context multinode-178114 describe pod busybox-769dd8b7dd-gvwth
helpers_test.go:291: (dbg) kubectl --context multinode-178114 describe pod busybox-769dd8b7dd-gvwth:

                                                
                                                
-- stdout --
	Name:             busybox-769dd8b7dd-gvwth
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=769dd8b7dd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-769dd8b7dd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rs2b9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rs2b9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  6s    default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable. no new claims to deallocate, preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  1s    default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  1s    default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:294: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (65.32s)
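
Note on the FailedScheduling events recorded above: the scheduler reports "1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable / had untolerated taint(s)", which is consistent with the busybox Deployment requesting required pod anti-affinity so that its two replicas land on different nodes; while the second node is still rejoining after the restart (the controller-manager only re-assigns its PodCIDR at 07:51:56), the pending replica has nowhere to schedule. Below is a minimal Go sketch of such an anti-affinity term, assuming an app=busybox label selector and the kubernetes.io/hostname topology key; both are illustrative assumptions, not values taken from the test manifest.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Required pod anti-affinity: refuse to schedule this pod onto a node that
	// already runs a pod matching the selector (i.e. one busybox replica per node).
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"}, // assumed label
				},
				TopologyKey: "kubernetes.io/hostname", // assumed topology key: spread per node
			}},
		},
	}
	out, _ := json.MarshalIndent(affinity, "", "  ")
	fmt.Println(string(out)) // prints the affinity stanza as it would appear in the pod spec
}

With only one schedulable node available until the worker finishes rejoining, a term like this leaves the second replica Pending, matching the describe output above.
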

                                                
                                    

Test pass (314/370)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.75
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.35.0/json-events 2.86
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.08
18 TestDownloadOnly/v1.35.0/DeleteAll 0.16
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.65
22 TestOffline 102.25
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 147.6
29 TestAddons/serial/Volcano 44.22
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 8.57
35 TestAddons/parallel/Registry 16.49
36 TestAddons/parallel/RegistryCreds 0.76
37 TestAddons/parallel/Ingress 22.47
38 TestAddons/parallel/InspektorGadget 12
39 TestAddons/parallel/MetricsServer 6.31
41 TestAddons/parallel/CSI 56.16
42 TestAddons/parallel/Headlamp 21.73
43 TestAddons/parallel/CloudSpanner 5.6
44 TestAddons/parallel/LocalPath 55.03
45 TestAddons/parallel/NvidiaDevicePlugin 6.6
46 TestAddons/parallel/Yakd 11.71
48 TestAddons/StoppedEnableDisable 13.99
49 TestCertOptions 62.39
50 TestCertExpiration 310.13
51 TestDockerFlags 58.91
52 TestForceSystemdFlag 101.35
53 TestForceSystemdEnv 62.2
58 TestErrorSpam/setup 39.15
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.75
61 TestErrorSpam/pause 1.35
62 TestErrorSpam/unpause 1.56
63 TestErrorSpam/stop 5.28
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 75.76
68 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/KubeContext 0.05
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.3
75 TestFunctional/serial/CacheCmd/cache/add_local 1.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.03
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
85 TestFunctional/serial/LogsCmd 60.56
86 TestFunctional/serial/LogsFileCmd 60.56
89 TestFunctional/parallel/ConfigCmd 0.42
91 TestFunctional/parallel/DryRun 0.23
92 TestFunctional/parallel/InternationalLanguage 0.12
98 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/SSHCmd 0.35
102 TestFunctional/parallel/CpCmd 0.95
104 TestFunctional/parallel/FileSync 0.15
105 TestFunctional/parallel/CertSync 0.94
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.19
113 TestFunctional/parallel/License 0.47
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 0.52
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.17
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.17
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.17
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
123 TestFunctional/parallel/ImageCommands/ImageBuild 3.51
124 TestFunctional/parallel/ImageCommands/Setup 1.04
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.9
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.66
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.07
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.34
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.58
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
134 TestFunctional/parallel/MountCmd/specific-port 1.14
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.09
146 TestFunctional/parallel/ProfileCmd/profile_not_create 15.88
147 TestFunctional/parallel/ProfileCmd/profile_list 16.74
149 TestFunctional/parallel/ProfileCmd/profile_json_output 15.84
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
158 TestGvisorAddon 192.18
161 TestMultiControlPlane/serial/StartCluster 252.63
162 TestMultiControlPlane/serial/DeployApp 6.95
163 TestMultiControlPlane/serial/PingHostFromPods 1.41
164 TestMultiControlPlane/serial/AddWorkerNode 50.13
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.78
167 TestMultiControlPlane/serial/CopyFile 11.12
168 TestMultiControlPlane/serial/StopSecondaryNode 13.99
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.58
170 TestMultiControlPlane/serial/RestartSecondaryNode 29.9
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 150.6
173 TestMultiControlPlane/serial/DeleteSecondaryNode 7.45
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
175 TestMultiControlPlane/serial/StopCluster 39.33
176 TestMultiControlPlane/serial/RestartCluster 100.98
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.59
178 TestMultiControlPlane/serial/AddSecondaryNode 84.35
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
182 TestImageBuild/serial/Setup 36.68
183 TestImageBuild/serial/NormalBuild 1.56
184 TestImageBuild/serial/BuildWithBuildArg 1
185 TestImageBuild/serial/BuildWithDockerIgnore 0.82
186 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1
191 TestJSONOutput/start/Command 77.97
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.6
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.61
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 14.33
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.24
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 78.83
223 TestMountStart/serial/StartWithMountFirst 20.85
224 TestMountStart/serial/VerifyMountFirst 0.3
225 TestMountStart/serial/StartWithMountSecond 20.9
226 TestMountStart/serial/VerifyMountSecond 0.31
227 TestMountStart/serial/DeleteFirst 0.69
228 TestMountStart/serial/VerifyMountPostDelete 0.32
229 TestMountStart/serial/Stop 1.36
230 TestMountStart/serial/RestartStopped 19.52
231 TestMountStart/serial/VerifyMountPostStop 0.31
234 TestMultiNode/serial/FreshStart2Nodes 110.36
235 TestMultiNode/serial/DeployApp2Nodes 4.91
236 TestMultiNode/serial/PingHostFrom2Pods 0.91
237 TestMultiNode/serial/AddNode 47.12
238 TestMultiNode/serial/MultiNodeLabels 0.07
239 TestMultiNode/serial/ProfileList 0.48
240 TestMultiNode/serial/CopyFile 6.06
241 TestMultiNode/serial/StopNode 2.52
242 TestMultiNode/serial/StartAfterStop 43.8
243 TestMultiNode/serial/RestartKeepsNodes 147.79
244 TestMultiNode/serial/DeleteNode 2.26
245 TestMultiNode/serial/StopMultiNode 27.33
247 TestMultiNode/serial/ValidateNameConflict 41.28
254 TestScheduledStopUnix 109.92
255 TestSkaffold 120.08
258 TestRunningBinaryUpgrade 496
260 TestKubernetesUpgrade 164.51
265 TestPreload/Start-NoPreload-PullImage 150.41
282 TestPause/serial/Start 83.67
283 TestPause/serial/SecondStartNoReconfiguration 58.55
284 TestISOImage/Setup 24.22
286 TestISOImage/Binaries/crictl 0.21
287 TestISOImage/Binaries/curl 0.18
288 TestISOImage/Binaries/docker 0.19
289 TestISOImage/Binaries/git 0.32
290 TestISOImage/Binaries/iptables 0.17
291 TestISOImage/Binaries/podman 0.32
292 TestISOImage/Binaries/rsync 0.22
293 TestISOImage/Binaries/socat 0.36
294 TestISOImage/Binaries/wget 0.41
295 TestISOImage/Binaries/VBoxControl 0.2
296 TestISOImage/Binaries/VBoxService 0.2
297 TestPause/serial/Pause 0.6
298 TestPause/serial/VerifyStatus 0.24
299 TestPause/serial/Unpause 0.58
300 TestPause/serial/PauseAgain 0.77
301 TestPause/serial/DeletePaused 0.85
302 TestPause/serial/VerifyDeletedResources 6.64
303 TestPreload/Restart-With-Preload-Check-User-Image 64.65
305 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
306 TestNoKubernetes/serial/StartWithK8s 76.89
308 TestNoKubernetes/serial/StartWithStopK8s 37.98
309 TestNoKubernetes/serial/Start 28
310 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
311 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
312 TestNoKubernetes/serial/ProfileList 1.77
313 TestNoKubernetes/serial/Stop 1.46
314 TestStoppedBinaryUpgrade/Setup 0.63
315 TestStoppedBinaryUpgrade/Upgrade 129.64
316 TestNoKubernetes/serial/StartNoArgs 33.89
317 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
318 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
320 TestStartStop/group/old-k8s-version/serial/FirstStart 97.16
322 TestStartStop/group/no-preload/serial/FirstStart 70.41
324 TestStartStop/group/embed-certs/serial/FirstStart 103.43
325 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
326 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.33
327 TestStartStop/group/old-k8s-version/serial/Stop 13.35
328 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
329 TestStartStop/group/old-k8s-version/serial/SecondStart 48.21
330 TestStartStop/group/no-preload/serial/DeployApp 9.35
331 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
332 TestStartStop/group/no-preload/serial/Stop 13.61
333 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
334 TestStartStop/group/no-preload/serial/SecondStart 54.31
335 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
336 TestStartStop/group/embed-certs/serial/DeployApp 10.36
337 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
338 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
339 TestStartStop/group/embed-certs/serial/Stop 13.72
340 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.18
341 TestStartStop/group/old-k8s-version/serial/Pause 2.99
343 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.93
344 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
345 TestStartStop/group/embed-certs/serial/SecondStart 62.32
346 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13
347 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
348 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
349 TestStartStop/group/no-preload/serial/Pause 3.39
351 TestStartStop/group/newest-cni/serial/FirstStart 49
352 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
354 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
355 TestStartStop/group/embed-certs/serial/Pause 2.92
356 TestNetworkPlugins/group/auto/Start 83.53
357 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.45
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.73
360 TestStartStop/group/newest-cni/serial/DeployApp 0
361 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
362 TestStartStop/group/newest-cni/serial/Stop 6.47
363 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
364 TestStartStop/group/newest-cni/serial/SecondStart 33.87
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
366 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.42
367 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
369 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.19
370 TestStartStop/group/newest-cni/serial/Pause 2.65
371 TestNetworkPlugins/group/kindnet/Start 69.28
372 TestNetworkPlugins/group/auto/KubeletFlags 0.19
373 TestNetworkPlugins/group/auto/NetCatPod 11.32
374 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.32
375 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
376 TestNetworkPlugins/group/auto/DNS 0.19
377 TestNetworkPlugins/group/auto/Localhost 0.17
378 TestNetworkPlugins/group/auto/HairPin 0.16
379 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
380 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.96
381 TestNetworkPlugins/group/calico/Start 100.18
382 TestNetworkPlugins/group/custom-flannel/Start 79.61
383 TestNetworkPlugins/group/false/Start 104.78
384 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
385 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
386 TestNetworkPlugins/group/kindnet/NetCatPod 12.29
387 TestNetworkPlugins/group/kindnet/DNS 0.21
388 TestNetworkPlugins/group/kindnet/Localhost 0.18
389 TestNetworkPlugins/group/kindnet/HairPin 0.19
390 TestNetworkPlugins/group/enable-default-cni/Start 96.15
391 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
392 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
393 TestNetworkPlugins/group/calico/ControllerPod 6.01
394 TestNetworkPlugins/group/custom-flannel/DNS 0.19
395 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
396 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
397 TestNetworkPlugins/group/calico/KubeletFlags 0.19
398 TestNetworkPlugins/group/calico/NetCatPod 12.31
399 TestNetworkPlugins/group/flannel/Start 62.73
400 TestNetworkPlugins/group/calico/DNS 0.19
401 TestNetworkPlugins/group/calico/Localhost 0.2
402 TestNetworkPlugins/group/calico/HairPin 0.19
403 TestNetworkPlugins/group/false/KubeletFlags 0.18
404 TestNetworkPlugins/group/false/NetCatPod 12.28
405 TestNetworkPlugins/group/bridge/Start 89.5
406 TestNetworkPlugins/group/false/DNS 0.18
407 TestNetworkPlugins/group/false/Localhost 0.14
408 TestNetworkPlugins/group/false/HairPin 0.15
409 TestNetworkPlugins/group/kubenet/Start 85.39
410 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
411 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
412 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
413 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
414 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
415 TestNetworkPlugins/group/flannel/ControllerPod 6.01
416 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
417 TestNetworkPlugins/group/flannel/NetCatPod 12.33
418 TestPreload/PreloadSrc/gcs 3.62
419 TestPreload/PreloadSrc/github 17.1
420 TestNetworkPlugins/group/flannel/DNS 0.17
421 TestNetworkPlugins/group/flannel/Localhost 0.15
422 TestNetworkPlugins/group/flannel/HairPin 0.15
423 TestPreload/PreloadSrc/gcs-cached 0.27
425 TestISOImage/PersistentMounts//data 0.17
426 TestISOImage/PersistentMounts//var/lib/docker 0.17
427 TestISOImage/PersistentMounts//var/lib/cni 0.17
428 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
429 TestISOImage/PersistentMounts//var/lib/minikube 0.18
430 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
431 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
432 TestISOImage/VersionJSON 0.17
433 TestISOImage/eBPFSupport 0.16
434 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
435 TestNetworkPlugins/group/bridge/NetCatPod 11.24
436 TestNetworkPlugins/group/bridge/DNS 0.17
437 TestNetworkPlugins/group/bridge/Localhost 0.19
438 TestNetworkPlugins/group/bridge/HairPin 0.14
439 TestNetworkPlugins/group/kubenet/KubeletFlags 0.18
440 TestNetworkPlugins/group/kubenet/NetCatPod 12.28
441 TestNetworkPlugins/group/kubenet/DNS 0.17
442 TestNetworkPlugins/group/kubenet/Localhost 0.14
443 TestNetworkPlugins/group/kubenet/HairPin 0.13
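For reference, each entry above is a passed test with its wall-clock duration in seconds; the detailed pass logs follow below. As an illustrative sketch only (not part of this run's output), a single subtest from this table can normally be re-run locally from a minikube source checkout with go test's subtest filter. The test/integration package path and a prebuilt out/minikube-linux-amd64 binary are assumptions about the local setup, not facts recorded in this report, and the suite may require additional flags that are omitted here.

  # Illustrative local re-run of one passed subtest (assumed layout: minikube source tree,
  # integration suite under ./test/integration, binary already built at ./out/minikube-linux-amd64;
  # extra suite-specific flags such as driver selection may be needed and are not shown).
  go test ./test/integration -v -timeout 30m -run 'TestAddons/parallel/Registry'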
TestDownloadOnly/v1.28.0/json-events (7.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-009618 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-009618 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 : (7.751808816s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.75s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1229 06:46:10.362224   13486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1229 06:46:10.362313   13486 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-009618
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-009618: exit status 85 (76.033429ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-009618 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-009618 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:46:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:46:02.665684   13498 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:46:02.665953   13498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:02.665963   13498 out.go:374] Setting ErrFile to fd 2...
	I1229 06:46:02.665967   13498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:02.666178   13498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	W1229 06:46:02.666380   13498 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22353-9552/.minikube/config/config.json: open /home/jenkins/minikube-integration/22353-9552/.minikube/config/config.json: no such file or directory
	I1229 06:46:02.666949   13498 out.go:368] Setting JSON to true
	I1229 06:46:02.667936   13498 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1713,"bootTime":1766989050,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:46:02.668011   13498 start.go:143] virtualization: kvm guest
	I1229 06:46:02.673479   13498 out.go:99] [download-only-009618] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:46:02.673766   13498 notify.go:221] Checking for updates...
	W1229 06:46:02.673774   13498 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball: no such file or directory
	I1229 06:46:02.675441   13498 out.go:171] MINIKUBE_LOCATION=22353
	I1229 06:46:02.677047   13498 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:46:02.678664   13498 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:46:02.680254   13498 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 06:46:02.681693   13498 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1229 06:46:02.684494   13498 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1229 06:46:02.684836   13498 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:46:03.236822   13498 out.go:99] Using the kvm2 driver based on user configuration
	I1229 06:46:03.236867   13498 start.go:309] selected driver: kvm2
	I1229 06:46:03.236875   13498 start.go:928] validating driver "kvm2" against <nil>
	I1229 06:46:03.237211   13498 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 06:46:03.237707   13498 start_flags.go:417] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1229 06:46:03.237870   13498 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 06:46:03.237899   13498 cni.go:84] Creating CNI manager for ""
	I1229 06:46:03.237951   13498 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:46:03.237961   13498 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1229 06:46:03.238017   13498 start.go:353] cluster config:
	{Name:download-only-009618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-009618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:46:03.238188   13498 iso.go:125] acquiring lock: {Name:mk2adf09d18eb25f1d98559b1ab4af84fc4e9a54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 06:46:03.239965   13498 out.go:99] Downloading VM boot image ...
	I1229 06:46:03.240027   13498 download.go:114] Downloading: https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22353-9552/.minikube/cache/iso/amd64/minikube-v1.37.0-1766979747-22353-amd64.iso
	I1229 06:46:06.295365   13498 out.go:99] Starting "download-only-009618" primary control-plane node in "download-only-009618" cluster
	I1229 06:46:06.295406   13498 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1229 06:46:06.311109   13498 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1229 06:46:06.311150   13498 cache.go:65] Caching tarball of preloaded images
	I1229 06:46:06.311335   13498 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1229 06:46:06.313314   13498 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1229 06:46:06.313348   13498 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1229 06:46:06.313356   13498 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1229 06:46:06.332105   13498 preload.go:313] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1229 06:46:06.332247   13498 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-009618 host does not exist
	  To start a cluster, run: "minikube start -p download-only-009618"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-009618
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (2.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-519230 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-519230 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=kvm2 : (2.856790322s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (2.86s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1229 06:46:13.605657   13486 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1229 06:46:13.605692   13486 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-519230
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-519230: exit status 85 (77.02541ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-009618 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-009618 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │                     │
	│ delete  │ --all                                                                                                                                           │ minikube             │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ delete  │ -p download-only-009618                                                                                                                         │ download-only-009618 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ start   │ -o=json --download-only -p download-only-519230 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=kvm2 │ download-only-519230 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:46:10
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:46:10.800878   13707 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:46:10.801004   13707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:10.801013   13707 out.go:374] Setting ErrFile to fd 2...
	I1229 06:46:10.801017   13707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:10.801217   13707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 06:46:10.801660   13707 out.go:368] Setting JSON to true
	I1229 06:46:10.802473   13707 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1721,"bootTime":1766989050,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:46:10.802530   13707 start.go:143] virtualization: kvm guest
	I1229 06:46:10.804843   13707 out.go:99] [download-only-519230] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:46:10.804994   13707 notify.go:221] Checking for updates...
	I1229 06:46:10.806380   13707 out.go:171] MINIKUBE_LOCATION=22353
	I1229 06:46:10.808412   13707 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:46:10.809716   13707 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 06:46:10.811247   13707 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 06:46:10.813072   13707 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-519230 host does not exist
	  To start a cluster, run: "minikube start -p download-only-519230"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-519230
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1229 06:46:14.304616   13486 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-004244 --alsologtostderr --binary-mirror http://127.0.0.1:42577 --driver=kvm2 
helpers_test.go:176: Cleaning up "binary-mirror-004244" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-004244
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
TestOffline (102.25s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-209360 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-209360 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 : (1m41.355319916s)
helpers_test.go:176: Cleaning up "offline-docker-209360" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-209360
--- PASS: TestOffline (102.25s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-909246
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-909246: exit status 85 (66.821949ms)

                                                
                                                
-- stdout --
	* Profile "addons-909246" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-909246"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-909246
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-909246: exit status 85 (67.618274ms)

                                                
                                                
-- stdout --
	* Profile "addons-909246" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-909246"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (147.6s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-909246 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-909246 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m27.601109672s)
--- PASS: TestAddons/Setup (147.60s)

                                                
                                    
TestAddons/serial/Volcano (44.22s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 26.696525ms
addons_test.go:886: volcano-controller stabilized in 28.088638ms
addons_test.go:878: volcano-admission stabilized in 28.628288ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-w58vz" [6fc72600-438f-4e85-b2c1-f7b979ead91e] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004889197s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-vq9mz" [9481bc37-3891-4577-98fb-d880af8845a0] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004949991s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-hgnrk" [c0051123-2784-42c0-90b1-4e8d437343ac] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004243526s
addons_test.go:905: (dbg) Run:  kubectl --context addons-909246 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-909246 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-909246 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [39648ff4-ca18-49b6-99ef-df855856dc7f] Pending
helpers_test.go:353: "test-job-nginx-0" [39648ff4-ca18-49b6-99ef-df855856dc7f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [39648ff4-ca18-49b6-99ef-df855856dc7f] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.004766998s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-909246 addons disable volcano --alsologtostderr -v=1: (11.744937932s)
--- PASS: TestAddons/serial/Volcano (44.22s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-909246 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-909246 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.57s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-909246 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-909246 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7237914a-1e0a-466e-abb5-f8bbcb2c7656] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7237914a-1e0a-466e-abb5-f8bbcb2c7656] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004150955s
addons_test.go:696: (dbg) Run:  kubectl --context addons-909246 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-909246 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-909246 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.57s)

                                                
                                    
TestAddons/parallel/Registry (16.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 12.176983ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-t6tdv" [756f97a0-7a0f-4b0a-adda-ed36f60f8eb9] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008558507s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-nrczl" [fa97b36e-5620-4a09-86b8-8714508f6503] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004396239s
addons_test.go:394: (dbg) Run:  kubectl --context addons-909246 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-909246 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-909246 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.577355159s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 ip
2025/12/29 06:50:00 [DEBUG] GET http://192.168.39.6:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.49s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.76s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.44813ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-909246
addons_test.go:334: (dbg) Run:  kubectl --context addons-909246 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.76s)

                                                
                                    
TestAddons/parallel/Ingress (22.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-909246 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-909246 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-909246 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [e9f8ddc9-2ea4-44a3-b719-baed6070f955] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [e9f8ddc9-2ea4-44a3-b719-baed6070f955] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.013230607s
I1229 06:50:04.117647   13486 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-909246 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.6
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-909246 addons disable ingress-dns --alsologtostderr -v=1: (1.111008812s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-909246 addons disable ingress --alsologtostderr -v=1: (8.004598874s)
--- PASS: TestAddons/parallel/Ingress (22.47s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-9cw9d" [bafb68d3-53a6-4dc9-8661-7f0bc6a0f472] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004066238s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-909246 addons disable inspektor-gadget --alsologtostderr -v=1: (5.992861073s)
--- PASS: TestAddons/parallel/InspektorGadget (12.00s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 11.832312ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-hxfvv" [5351a4c1-0d6a-4661-b1af-ce4f09b93f41] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00895566s
addons_test.go:465: (dbg) Run:  kubectl --context addons-909246 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-909246 addons disable metrics-server --alsologtostderr -v=1: (1.197385331s)
--- PASS: TestAddons/parallel/MetricsServer (6.31s)

                                                
                                    
TestAddons/parallel/CSI (56.16s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1229 06:50:06.660669   13486 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1229 06:50:06.668747   13486 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1229 06:50:06.668781   13486 kapi.go:107] duration metric: took 8.114637ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 8.12691ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-909246 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-909246 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [b2a77f0e-14a8-455c-ab28-a50153114943] Pending
helpers_test.go:353: "task-pv-pod" [b2a77f0e-14a8-455c-ab28-a50153114943] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [b2a77f0e-14a8-455c-ab28-a50153114943] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004887678s
addons_test.go:574: (dbg) Run:  kubectl --context addons-909246 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-909246 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-909246 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-909246 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-909246 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-909246 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-909246 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [334eb8ae-62fb-47fe-9958-8f433559d0cb] Pending
helpers_test.go:353: "task-pv-pod-restore" [334eb8ae-62fb-47fe-9958-8f433559d0cb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [334eb8ae-62fb-47fe-9958-8f433559d0cb] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005900777s
addons_test.go:616: (dbg) Run:  kubectl --context addons-909246 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-909246 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-909246 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-909246 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.974037267s)
--- PASS: TestAddons/parallel/CSI (56.16s)
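For reference, the snapshot-and-restore sequence exercised above can be replayed by hand against a cluster that has the volumesnapshots and csi-hostpath-driver addons enabled; the context name and manifest paths below are the ones from this run's log, so adjust them for another profile:
  kubectl --context addons-909246 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-909246 get volumesnapshot new-snapshot-demo -n default -o jsonpath='{.status.readyToUse}'
  kubectl --context addons-909246 delete pod task-pv-pod
  kubectl --context addons-909246 delete pvc hpvc
  kubectl --context addons-909246 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-909246 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml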

TestAddons/parallel/Headlamp (21.73s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-909246 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-9l42f" [d22f7686-c4a4-4cb5-aac2-62855958b1a7] Pending
helpers_test.go:353: "headlamp-6d8d595f-9l42f" [d22f7686-c4a4-4cb5-aac2-62855958b1a7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-9l42f" [d22f7686-c4a4-4cb5-aac2-62855958b1a7] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.004331343s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-909246 addons disable headlamp --alsologtostderr -v=1: (5.930267132s)
--- PASS: TestAddons/parallel/Headlamp (21.73s)

TestAddons/parallel/CloudSpanner (5.6s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-fdzxp" [22405756-9f30-48b6-8d4e-ba71da246705] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003399468s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (55.03s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-909246 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-909246 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-909246 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [37a47986-4823-4f22-918b-d1920e7d1c85] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [37a47986-4823-4f22-918b-d1920e7d1c85] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [37a47986-4823-4f22-918b-d1920e7d1c85] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00467899s
addons_test.go:969: (dbg) Run:  kubectl --context addons-909246 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 ssh "cat /opt/local-path-provisioner/pvc-60e48b23-4f43-4f44-8576-c979927d0800_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-909246 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-909246 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-909246 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.227315893s)
--- PASS: TestAddons/parallel/LocalPath (55.03s)
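A rough sketch of the same local-path check, assuming the storage-provisioner-rancher addon is enabled and using the manifests from minikube's testdata directory; <pv-name> stands in for the generated PV name seen in the log:
  kubectl --context addons-909246 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-909246 apply -f testdata/storage-provisioner-rancher/pod.yaml
  kubectl --context addons-909246 get pvc test-pvc -o jsonpath='{.status.phase}'
  # once the busybox pod completes, the file it wrote is visible on the node:
  out/minikube-linux-amd64 -p addons-909246 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"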

TestAddons/parallel/NvidiaDevicePlugin (6.6s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-ltcw9" [e9dce7a5-6c5c-45b0-9c9c-8d4034a1813e] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.06600394s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

TestAddons/parallel/Yakd (11.71s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-d4fnj" [0e1e02ff-871c-429a-9778-cff1eb402d8f] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004628357s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-909246 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-909246 addons disable yakd --alsologtostderr -v=1: (5.707803233s)
--- PASS: TestAddons/parallel/Yakd (11.71s)

TestAddons/StoppedEnableDisable (13.99s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-909246
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-909246: (13.779833577s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-909246
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-909246
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-909246
--- PASS: TestAddons/StoppedEnableDisable (13.99s)

TestCertOptions (62.39s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-329549 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-329549 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m1.137568072s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-329549 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-329549 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-329549 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-329549" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-329549
--- PASS: TestCertOptions (62.39s)
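The certificate checks above can be repeated manually; the profile name is the one from this run, and the expected values are simply what the test requested (extra SANs 192.168.15.15 and www.google.com, apiserver port 8555):
  out/minikube-linux-amd64 -p cert-options-329549 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  # the Subject Alternative Name section should list the values passed via --apiserver-ips/--apiserver-names
  kubectl --context cert-options-329549 config view
  # the server URL in the kubeconfig should end in the custom port 8555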

TestCertExpiration (310.13s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-890828 --memory=3072 --cert-expiration=3m --driver=kvm2 
E1229 08:00:06.161243   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-890828 --memory=3072 --cert-expiration=3m --driver=kvm2 : (1m2.800546038s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-890828 --memory=3072 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-890828 --memory=3072 --cert-expiration=8760h --driver=kvm2 : (1m6.467871775s)
helpers_test.go:176: Cleaning up "cert-expiration-890828" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-890828
--- PASS: TestCertExpiration (310.13s)
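As a sketch, the expiration flow is: start with a deliberately short certificate lifetime, wait past it, then start again with a longer lifetime so the certificates are regenerated (commands taken from the log above):
  out/minikube-linux-amd64 start -p cert-expiration-890828 --memory=3072 --cert-expiration=3m --driver=kvm2
  # after the 3m window has elapsed, a second start with a longer value forces regeneration
  out/minikube-linux-amd64 start -p cert-expiration-890828 --memory=3072 --cert-expiration=8760h --driver=kvm2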

TestDockerFlags (58.91s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-904977 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-904977 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (57.540206459s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-904977 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-904977 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-904977" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-904977
--- PASS: TestDockerFlags (58.91s)
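The docker flags can be verified by hand on the systemd unit inside the VM; profile name and values are the ones this run used:
  out/minikube-linux-amd64 start -p docker-flags-904977 --memory=3072 --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --driver=kvm2
  # the env vars should appear under Environment= and the opts under ExecStart= of the docker unit
  out/minikube-linux-amd64 -p docker-flags-904977 ssh "sudo systemctl show docker --property=Environment --no-pager"
  out/minikube-linux-amd64 -p docker-flags-904977 ssh "sudo systemctl show docker --property=ExecStart --no-pager"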

TestForceSystemdFlag (101.35s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-045304 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-045304 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m40.223176095s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-045304 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-045304" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-045304
--- PASS: TestForceSystemdFlag (101.35s)
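A minimal way to confirm the effect of --force-systemd, using the commands from the log (the same cgroup-driver check is reused by TestForceSystemdEnv below):
  out/minikube-linux-amd64 start -p force-systemd-flag-045304 --memory=3072 --force-systemd --driver=kvm2
  # with --force-systemd the container runtime should report the systemd cgroup driver
  out/minikube-linux-amd64 -p force-systemd-flag-045304 ssh "docker info --format {{.CgroupDriver}}"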

TestForceSystemdEnv (62.2s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-231188 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
E1229 07:58:43.091332   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-231188 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (1m1.008931101s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-231188 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-231188" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-231188
--- PASS: TestForceSystemdEnv (62.20s)

TestErrorSpam/setup (39.15s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-039815 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-039815 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-039815 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-039815 --driver=kvm2 : (39.144941487s)
--- PASS: TestErrorSpam/setup (39.15s)

TestErrorSpam/start (0.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.75s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.35s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 pause
--- PASS: TestErrorSpam/pause (1.35s)

TestErrorSpam/unpause (1.56s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

TestErrorSpam/stop (5.28s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 stop: (3.392961548s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 stop: (1.050316896s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-039815 --log_dir /tmp/nospam-039815 stop
--- PASS: TestErrorSpam/stop (5.28s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22353-9552/.minikube/files/etc/test/nested/copy/13486/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.76s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695625 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-695625 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m15.76435775s)
--- PASS: TestFunctional/serial/StartWithProxy (75.76s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.30s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-695625 /tmp/TestFunctionalserialCacheCmdcacheadd_local4026313029/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 cache add minikube-local-cache-test:functional-695625
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 cache delete minikube-local-cache-test:functional-695625
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-695625
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (169.087299ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.03s)
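The reload sequence above amounts to: remove a cached image inside the node, confirm it is gone, then push the cache back in. Roughly, against a running profile:
  out/minikube-linux-amd64 -p functional-695625 ssh sudo docker rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image removed
  out/minikube-linux-amd64 -p functional-695625 cache reload
  out/minikube-linux-amd64 -p functional-695625 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again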

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/LogsCmd (60.56s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs: (1m0.561173887s)
--- PASS: TestFunctional/serial/LogsCmd (60.56s)

TestFunctional/serial/LogsFileCmd (60.56s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 logs --file /tmp/TestFunctionalserialLogsFileCmd2404409027/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 logs --file /tmp/TestFunctionalserialLogsFileCmd2404409027/001/logs.txt: (1m0.55492886s)
--- PASS: TestFunctional/serial/LogsFileCmd (60.56s)

TestFunctional/parallel/ConfigCmd (0.42s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 config get cpus: exit status 14 (66.036819ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 config get cpus: exit status 14 (69.4516ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
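In short, config get on an unset key exits with status 14, which is what the test treats as success; a hand-run version of the same round trip:
  out/minikube-linux-amd64 -p functional-695625 config set cpus 2
  out/minikube-linux-amd64 -p functional-695625 config get cpus      # prints 2
  out/minikube-linux-amd64 -p functional-695625 config unset cpus
  out/minikube-linux-amd64 -p functional-695625 config get cpus      # exit status 14: key not found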

TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695625 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-695625 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (117.747343ms)

-- stdout --
	* [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1229 07:21:52.835626   25191 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:21:52.835746   25191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:21:52.835755   25191 out.go:374] Setting ErrFile to fd 2...
	I1229 07:21:52.835759   25191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:21:52.835988   25191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:21:52.836475   25191 out.go:368] Setting JSON to false
	I1229 07:21:52.837373   25191 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3863,"bootTime":1766989050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:21:52.837434   25191 start.go:143] virtualization: kvm guest
	I1229 07:21:52.839585   25191 out.go:179] * [functional-695625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:21:52.841206   25191 notify.go:221] Checking for updates...
	I1229 07:21:52.841225   25191 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:21:52.843042   25191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:21:52.844507   25191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:21:52.845957   25191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:21:52.847211   25191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:21:52.848489   25191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:21:52.850429   25191 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:21:52.851078   25191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:21:52.884011   25191 out.go:179] * Using the kvm2 driver based on existing profile
	I1229 07:21:52.885395   25191 start.go:309] selected driver: kvm2
	I1229 07:21:52.885414   25191 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:21:52.885545   25191 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:21:52.888389   25191 out.go:203] 
	W1229 07:21:52.889920   25191 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1229 07:21:52.892331   25191 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695625 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
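A sketch of the two dry-run cases exercised above (no cluster state is changed either way):
  # rejected: 250MB is below the usable minimum, so the command exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
  out/minikube-linux-amd64 start -p functional-695625 --dry-run --memory 250MB --driver=kvm2
  # accepted: without the memory override, validation against the existing profile passes
  out/minikube-linux-amd64 start -p functional-695625 --dry-run --alsologtostderr -v=1 --driver=kvm2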

TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695625 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-695625 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (116.433593ms)

-- stdout --
	* [functional-695625] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1229 07:24:12.577834   25866 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:24:12.578108   25866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:24:12.578120   25866 out.go:374] Setting ErrFile to fd 2...
	I1229 07:24:12.578124   25866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:24:12.578391   25866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:24:12.578874   25866 out.go:368] Setting JSON to false
	I1229 07:24:12.579759   25866 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4003,"bootTime":1766989050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:24:12.579858   25866 start.go:143] virtualization: kvm guest
	I1229 07:24:12.582170   25866 out.go:179] * [functional-695625] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1229 07:24:12.583620   25866 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:24:12.583612   25866 notify.go:221] Checking for updates...
	I1229 07:24:12.586166   25866 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:24:12.587629   25866 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	I1229 07:24:12.589023   25866 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	I1229 07:24:12.590535   25866 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:24:12.591900   25866 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:24:12.593453   25866 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:24:12.593981   25866 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:24:12.625888   25866 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1229 07:24:12.627045   25866 start.go:309] selected driver: kvm2
	I1229 07:24:12.627069   25866 start.go:928] validating driver "kvm2" against &{Name:functional-695625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0 ClusterName:functional-695625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:24:12.627185   25866 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:24:12.629157   25866 out.go:203] 
	W1229 07:24:12.630360   25866 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1229 07:24:12.631483   25866 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/SSHCmd (0.35s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.35s)

TestFunctional/parallel/CpCmd (0.95s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh -n functional-695625 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 cp functional-695625:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3240797659/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh -n functional-695625 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh -n functional-695625 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.95s)
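The cp paths used above, condensed; the local destination here is only an example:
  out/minikube-linux-amd64 -p functional-695625 cp testdata/cp-test.txt /home/docker/cp-test.txt                # host -> node
  out/minikube-linux-amd64 -p functional-695625 cp functional-695625:/home/docker/cp-test.txt ./cp-test.txt     # node -> host
  out/minikube-linux-amd64 -p functional-695625 ssh -n functional-695625 "sudo cat /home/docker/cp-test.txt"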

TestFunctional/parallel/FileSync (0.15s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/13486/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "sudo cat /etc/test/nested/copy/13486/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.15s)

TestFunctional/parallel/CertSync (0.94s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/13486.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "sudo cat /etc/ssl/certs/13486.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/13486.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "sudo cat /usr/share/ca-certificates/13486.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/134862.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "sudo cat /etc/ssl/certs/134862.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/134862.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "sudo cat /usr/share/ca-certificates/134862.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.94s)
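For context, the test drops a certificate named after the test process ID (13486 in this run) into the minikube home's certs directory before the cluster starts, then expects it in the usual trust locations inside the VM; a spot check under that assumption:
  out/minikube-linux-amd64 -p functional-695625 ssh "sudo cat /etc/ssl/certs/13486.pem"
  out/minikube-linux-amd64 -p functional-695625 ssh "sudo cat /usr/share/ca-certificates/13486.pem"
  out/minikube-linux-amd64 -p functional-695625 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named copy also checked by the test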

TestFunctional/parallel/NonActiveRuntimeDisabled (0.19s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 ssh "sudo systemctl is-active crio": exit status 1 (190.699088ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.19s)
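Since this profile runs the docker container runtime, the cri-o unit is expected not to be running; the check is simply:
  out/minikube-linux-amd64 -p functional-695625 ssh "sudo systemctl is-active crio"
  # prints "inactive" and exits non-zero, which the test accepts as the other runtime being disabled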

TestFunctional/parallel/License (0.47s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.47s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.52s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695625 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-695625
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695625 image ls --format short --alsologtostderr:
I1229 07:24:20.429843   26022 out.go:360] Setting OutFile to fd 1 ...
I1229 07:24:20.429949   26022 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:20.429961   26022 out.go:374] Setting ErrFile to fd 2...
I1229 07:24:20.429968   26022 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:20.430151   26022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
I1229 07:24:20.430699   26022 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:20.430814   26022 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:20.433103   26022 ssh_runner.go:195] Run: systemctl --version
I1229 07:24:20.435409   26022 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:20.435989   26022 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 08:22:22 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
I1229 07:24:20.436020   26022 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:20.436210   26022 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
I1229 07:24:20.514066   26022 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695625 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ localhost/my-image                                │ functional-695625 │ 2c94cdeea7926 │ 1.24MB │
│ docker.io/library/minikube-local-cache-test       │ functional-695625 │ 35831ac6fe85b │ 30B    │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0           │ 550794e3b12ac │ 51.7MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0           │ 32652ff1bbe6b │ 70.7MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0           │ 2c9a4b058bd7e │ 75.8MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-695625 │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0           │ 5c6acd67e9cd1 │ 89.8MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 0a108f7189562 │ 62.5MB │
│ registry.k8s.io/pause                             │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                             │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                             │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                             │ latest            │ 350b164e7ae1d │ 240kB  │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695625 image ls --format table --alsologtostderr:
I1229 07:24:24.454785   26143 out.go:360] Setting OutFile to fd 1 ...
I1229 07:24:24.454945   26143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:24.454960   26143 out.go:374] Setting ErrFile to fd 2...
I1229 07:24:24.454967   26143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:24.455179   26143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
I1229 07:24:24.455737   26143 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:24.455851   26143 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:24.457942   26143 ssh_runner.go:195] Run: systemctl --version
I1229 07:24:24.460172   26143 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:24.460559   26143 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 08:22:22 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
I1229 07:24:24.460585   26143 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:24.460720   26143 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
I1229 07:24:24.539110   26143 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695625 image ls --format json --alsologtostderr:
[{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"51700000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"2c94cdeea792667b44e2e8ebc001019ba0930fbf53e42dc5fe1a7bce8d42e14d","repoDigests":[],"repoTags":["localhost/my-image:functional-695625"],"size":"1240000"},{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb6
48f499","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"89800000"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"62500000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625"],"size":"4940000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"35831ac6fe85b3b11a163e506b60cc3529af27d4b07afc86c52c648f145343bf","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-695625"],"size":"30"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":[],"repoTags"
:["registry.k8s.io/kube-proxy:v1.35.0"],"size":"70700000"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"75800000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695625 image ls --format json --alsologtostderr:
I1229 07:24:24.283663   26132 out.go:360] Setting OutFile to fd 1 ...
I1229 07:24:24.283933   26132 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:24.283943   26132 out.go:374] Setting ErrFile to fd 2...
I1229 07:24:24.283947   26132 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:24.284159   26132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
I1229 07:24:24.284742   26132 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:24.284865   26132 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:24.287986   26132 ssh_runner.go:195] Run: systemctl --version
I1229 07:24:24.290515   26132 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:24.290948   26132 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 08:22:22 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
I1229 07:24:24.290988   26132 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:24.291130   26132 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
I1229 07:24:24.368914   26132 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695625 image ls --format yaml --alsologtostderr:
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "70700000"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "51700000"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "75800000"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 35831ac6fe85b3b11a163e506b60cc3529af27d4b07afc86c52c648f145343bf
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-695625
size: "30"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "89800000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695625 image ls --format yaml --alsologtostderr:
I1229 07:24:20.601101   26033 out.go:360] Setting OutFile to fd 1 ...
I1229 07:24:20.601347   26033 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:20.601357   26033 out.go:374] Setting ErrFile to fd 2...
I1229 07:24:20.601364   26033 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:20.601556   26033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
I1229 07:24:20.602137   26033 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:20.602263   26033 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:20.604496   26033 ssh_runner.go:195] Run: systemctl --version
I1229 07:24:20.606783   26033 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:20.607301   26033 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 08:22:22 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
I1229 07:24:20.607333   26033 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:20.607509   26033 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
I1229 07:24:20.684782   26033 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 ssh pgrep buildkitd: exit status 1 (148.423542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image build -t localhost/my-image:functional-695625 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-695625 image build -t localhost/my-image:functional-695625 testdata/build --alsologtostderr: (3.193324334s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695625 image build -t localhost/my-image:functional-695625 testdata/build --alsologtostderr:
I1229 07:24:20.912944   26055 out.go:360] Setting OutFile to fd 1 ...
I1229 07:24:20.913245   26055 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:20.913256   26055 out.go:374] Setting ErrFile to fd 2...
I1229 07:24:20.913260   26055 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:20.913872   26055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
I1229 07:24:20.915089   26055 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:20.915806   26055 config.go:182] Loaded profile config "functional-695625": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:20.917897   26055 ssh_runner.go:195] Run: systemctl --version
I1229 07:24:20.920024   26055 main.go:144] libmachine: domain functional-695625 has defined MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:20.920585   26055 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:32:c1", ip: ""} in network mk-functional-695625: {Iface:virbr1 ExpiryTime:2025-12-29 08:22:22 +0000 UTC Type:0 Mac:52:54:00:66:32:c1 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-695625 Clientid:01:52:54:00:66:32:c1}
I1229 07:24:20.920618   26055 main.go:144] libmachine: domain functional-695625 has defined IP address 192.168.39.121 and MAC address 52:54:00:66:32:c1 in network mk-functional-695625
I1229 07:24:20.920809   26055 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/functional-695625/id_rsa Username:docker}
I1229 07:24:20.997709   26055 build_images.go:162] Building image from path: /tmp/build.2782276527.tar
I1229 07:24:20.997785   26055 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1229 07:24:21.010872   26055 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2782276527.tar
I1229 07:24:21.016041   26055 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2782276527.tar: stat -c "%s %y" /var/lib/minikube/build/build.2782276527.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2782276527.tar': No such file or directory
I1229 07:24:21.016084   26055 ssh_runner.go:362] scp /tmp/build.2782276527.tar --> /var/lib/minikube/build/build.2782276527.tar (3072 bytes)
I1229 07:24:21.052167   26055 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2782276527
I1229 07:24:21.065207   26055 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2782276527 -xf /var/lib/minikube/build/build.2782276527.tar
I1229 07:24:21.077694   26055 docker.go:364] Building image: /var/lib/minikube/build/build.2782276527
I1229 07:24:21.077769   26055 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-695625 /var/lib/minikube/build/build.2782276527
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:2c94cdeea792667b44e2e8ebc001019ba0930fbf53e42dc5fe1a7bce8d42e14d
#8 writing image sha256:2c94cdeea792667b44e2e8ebc001019ba0930fbf53e42dc5fe1a7bce8d42e14d done
#8 naming to localhost/my-image:functional-695625 done
#8 DONE 0.1s
I1229 07:24:24.015385   26055 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-695625 /var/lib/minikube/build/build.2782276527: (2.937593481s)
I1229 07:24:24.015472   26055 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2782276527
I1229 07:24:24.030098   26055 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2782276527.tar
I1229 07:24:24.047317   26055 build_images.go:218] Built localhost/my-image:functional-695625 from /tmp/build.2782276527.tar
I1229 07:24:24.047357   26055 build_images.go:134] succeeded building to: functional-695625
I1229 07:24:24.047362   26055 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)

TestFunctional/parallel/ImageCommands/Setup (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0: (1.018850483s)
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.04s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.90s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.66s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.34s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

TestFunctional/parallel/MountCmd/specific-port (1.14s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdspecific-port525394001/001:/mount-9p --alsologtostderr -v=1 --port 33243]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (149.392311ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1229 07:19:06.040853   13486 retry.go:84] will retry after 300ms: exit status 1 (duplicate log for 35.5s)
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdspecific-port525394001/001:/mount-9p --alsologtostderr -v=1 --port 33243] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 ssh "sudo umount -f /mount-9p": exit status 1 (150.402033ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-695625 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdspecific-port525394001/001:/mount-9p --alsologtostderr -v=1 --port 33243] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.14s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.09s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T" /mount1: exit status 1 (166.418857ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-695625 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-695625 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695625 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3234026895/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.09s)

TestFunctional/parallel/ProfileCmd/profile_not_create (15.88s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
functional_test.go:1295: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.798292463s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (15.88s)

TestFunctional/parallel/ProfileCmd/profile_list (16.74s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: (dbg) Done: out/minikube-linux-amd64 profile list: (16.673229379s)
functional_test.go:1335: Took "16.67333361s" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "61.835072ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (16.74s)

TestFunctional/parallel/ProfileCmd/profile_json_output (15.84s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: (dbg) Done: out/minikube-linux-amd64 profile list -o json: (15.779780607s)
functional_test.go:1386: Took "15.779873782s" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "59.729267ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (15.84s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-695625
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-695625
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-695625
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (192.18s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-593112 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-593112 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (59.192396574s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-593112 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-593112 cache add gcr.io/k8s-minikube/gvisor-addon:2: (3.699052121s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-593112 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-593112 addons enable gvisor: (6.58722109s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [6348588f-92c5-404d-b493-c452e6e8eea7] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.00401996s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-593112 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [0646d547-d6e7-4ace-847e-3279e615c8a8] Pending
helpers_test.go:353: "nginx-gvisor" [0646d547-d6e7-4ace-847e-3279e615c8a8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-gvisor" [0646d547-d6e7-4ace-847e-3279e615c8a8] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 51.004826019s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-593112
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-593112: (11.13758608s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-593112 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-593112 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (42.444412129s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [6348588f-92c5-404d-b493-c452e6e8eea7] Running
E1229 08:01:20.747257   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:01:20.752611   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:01:20.762947   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:01:20.783337   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:01:20.823691   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:01:20.904047   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:01:21.064620   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:01:21.385258   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:01:22.026387   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:01:23.306963   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.003919776s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [0646d547-d6e7-4ace-847e-3279e615c8a8] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1229 08:01:25.867199   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004725536s
helpers_test.go:176: Cleaning up "gvisor-593112" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-593112
--- PASS: TestGvisorAddon (192.18s)

TestMultiControlPlane/serial/StartCluster (252.63s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 
E1229 07:28:43.095082   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:08.200585   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:08.205899   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:08.216282   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:08.236636   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:08.276975   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:08.357353   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:08.517838   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:08.838265   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:09.478906   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:10.759596   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:13.320376   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:18.440709   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:28.681281   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:49.161569   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:30:30.121951   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 : (4m11.958073764s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (252.63s)

TestMultiControlPlane/serial/DeployApp (6.95s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 kubectl -- rollout status deployment/busybox: (4.141986689s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-49wf5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-qhzz4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-xkxf5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-49wf5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-qhzz4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-xkxf5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-49wf5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-qhzz4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-xkxf5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.95s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.41s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-49wf5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-49wf5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-qhzz4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-qhzz4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-xkxf5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-xkxf5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.41s)
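The probe above is the whole host-reachability check: `nslookup host.minikube.internal` runs inside a busybox pod, `awk 'NR==5'` keeps the line carrying the resolved address in this busybox image, `cut -d' ' -f3` extracts the IP (192.168.39.1 here), and that IP is pinged once. A minimal manual re-run sketched from the commands above (the pod name is specific to this run and changes every time):

    HOST_IP=$(out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-49wf5 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 -p ha-181620 kubectl -- exec busybox-769dd8b7dd-49wf5 -- \
      sh -c "ping -c 1 $HOST_IP"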

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (50.13s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 node add --alsologtostderr -v 5
E1229 07:31:52.042452   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 node add --alsologtostderr -v 5: (49.379304831s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.13s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-181620 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (11.12s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp testdata/cp-test.txt ha-181620:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4279307043/001/cp-test_ha-181620.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620:/home/docker/cp-test.txt ha-181620-m02:/home/docker/cp-test_ha-181620_ha-181620-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m02 "sudo cat /home/docker/cp-test_ha-181620_ha-181620-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620:/home/docker/cp-test.txt ha-181620-m03:/home/docker/cp-test_ha-181620_ha-181620-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m03 "sudo cat /home/docker/cp-test_ha-181620_ha-181620-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620:/home/docker/cp-test.txt ha-181620-m04:/home/docker/cp-test_ha-181620_ha-181620-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m04 "sudo cat /home/docker/cp-test_ha-181620_ha-181620-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp testdata/cp-test.txt ha-181620-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4279307043/001/cp-test_ha-181620-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m02:/home/docker/cp-test.txt ha-181620:/home/docker/cp-test_ha-181620-m02_ha-181620.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620 "sudo cat /home/docker/cp-test_ha-181620-m02_ha-181620.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m02:/home/docker/cp-test.txt ha-181620-m03:/home/docker/cp-test_ha-181620-m02_ha-181620-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m03 "sudo cat /home/docker/cp-test_ha-181620-m02_ha-181620-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m02:/home/docker/cp-test.txt ha-181620-m04:/home/docker/cp-test_ha-181620-m02_ha-181620-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m04 "sudo cat /home/docker/cp-test_ha-181620-m02_ha-181620-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp testdata/cp-test.txt ha-181620-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4279307043/001/cp-test_ha-181620-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m03:/home/docker/cp-test.txt ha-181620:/home/docker/cp-test_ha-181620-m03_ha-181620.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620 "sudo cat /home/docker/cp-test_ha-181620-m03_ha-181620.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m03:/home/docker/cp-test.txt ha-181620-m02:/home/docker/cp-test_ha-181620-m03_ha-181620-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m02 "sudo cat /home/docker/cp-test_ha-181620-m03_ha-181620-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m03:/home/docker/cp-test.txt ha-181620-m04:/home/docker/cp-test_ha-181620-m03_ha-181620-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m04 "sudo cat /home/docker/cp-test_ha-181620-m03_ha-181620-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp testdata/cp-test.txt ha-181620-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4279307043/001/cp-test_ha-181620-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m04:/home/docker/cp-test.txt ha-181620:/home/docker/cp-test_ha-181620-m04_ha-181620.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620 "sudo cat /home/docker/cp-test_ha-181620-m04_ha-181620.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m04:/home/docker/cp-test.txt ha-181620-m02:/home/docker/cp-test_ha-181620-m04_ha-181620-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m02 "sudo cat /home/docker/cp-test_ha-181620-m04_ha-181620-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m04:/home/docker/cp-test.txt ha-181620-m03:/home/docker/cp-test_ha-181620-m04_ha-181620-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m03 "sudo cat /home/docker/cp-test_ha-181620-m04_ha-181620-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.12s)
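Each `cp`/`ssh` pair above is one round trip of the same check: push testdata/cp-test.txt to a node (or from one node to another), then `sudo cat` the destination over SSH to confirm the contents arrived. One round trip, lifted verbatim from the sequence above, looks like this when run by hand:

    out/minikube-linux-amd64 -p ha-181620 cp testdata/cp-test.txt ha-181620-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m02 "sudo cat /home/docker/cp-test.txt"
    # node-to-node variant from the same block:
    out/minikube-linux-amd64 -p ha-181620 cp ha-181620-m02:/home/docker/cp-test.txt ha-181620-m03:/home/docker/cp-test_ha-181620-m02_ha-181620-m03.txt
    out/minikube-linux-amd64 -p ha-181620 ssh -n ha-181620-m03 "sudo cat /home/docker/cp-test_ha-181620-m02_ha-181620-m03.txt"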

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.99s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 node stop m02 --alsologtostderr -v 5: (13.418978252s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181620 status --alsologtostderr -v 5: exit status 7 (566.651206ms)

                                                
                                                
-- stdout --
	ha-181620
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181620-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-181620-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181620-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:32:39.472675   29689 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:32:39.472934   29689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:32:39.472943   29689 out.go:374] Setting ErrFile to fd 2...
	I1229 07:32:39.472946   29689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:32:39.473127   29689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:32:39.473296   29689 out.go:368] Setting JSON to false
	I1229 07:32:39.473320   29689 mustload.go:66] Loading cluster: ha-181620
	I1229 07:32:39.473493   29689 notify.go:221] Checking for updates...
	I1229 07:32:39.473648   29689 config.go:182] Loaded profile config "ha-181620": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:32:39.473660   29689 status.go:174] checking status of ha-181620 ...
	I1229 07:32:39.476226   29689 status.go:371] ha-181620 host status = "Running" (err=<nil>)
	I1229 07:32:39.476245   29689 host.go:66] Checking if "ha-181620" exists ...
	I1229 07:32:39.479622   29689 main.go:144] libmachine: domain ha-181620 has defined MAC address 52:54:00:4f:eb:2f in network mk-ha-181620
	I1229 07:32:39.480267   29689 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:eb:2f", ip: ""} in network mk-ha-181620: {Iface:virbr1 ExpiryTime:2025-12-29 08:27:17 +0000 UTC Type:0 Mac:52:54:00:4f:eb:2f Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-181620 Clientid:01:52:54:00:4f:eb:2f}
	I1229 07:32:39.480311   29689 main.go:144] libmachine: domain ha-181620 has defined IP address 192.168.39.163 and MAC address 52:54:00:4f:eb:2f in network mk-ha-181620
	I1229 07:32:39.480486   29689 host.go:66] Checking if "ha-181620" exists ...
	I1229 07:32:39.480831   29689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:32:39.483668   29689 main.go:144] libmachine: domain ha-181620 has defined MAC address 52:54:00:4f:eb:2f in network mk-ha-181620
	I1229 07:32:39.484260   29689 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:eb:2f", ip: ""} in network mk-ha-181620: {Iface:virbr1 ExpiryTime:2025-12-29 08:27:17 +0000 UTC Type:0 Mac:52:54:00:4f:eb:2f Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-181620 Clientid:01:52:54:00:4f:eb:2f}
	I1229 07:32:39.484293   29689 main.go:144] libmachine: domain ha-181620 has defined IP address 192.168.39.163 and MAC address 52:54:00:4f:eb:2f in network mk-ha-181620
	I1229 07:32:39.484472   29689 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/ha-181620/id_rsa Username:docker}
	I1229 07:32:39.584413   29689 ssh_runner.go:195] Run: systemctl --version
	I1229 07:32:39.591861   29689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:32:39.611342   29689 kubeconfig.go:125] found "ha-181620" server: "https://192.168.39.254:8443"
	I1229 07:32:39.611375   29689 api_server.go:166] Checking apiserver status ...
	I1229 07:32:39.611413   29689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:32:39.636057   29689 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2592/cgroup
	I1229 07:32:39.648838   29689 ssh_runner.go:195] Run: sudo grep ^0:: /proc/2592/cgroup
	I1229 07:32:39.663563   29689 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53f8aa7db9b4da56f8cb81d76ccf0c01.slice/docker-f35c61b21b3a368500a194e2ad9842d775e83ebdd99fc64e87ae389f9cbfd176.scope/cgroup.freeze
	I1229 07:32:39.679698   29689 api_server.go:299] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1229 07:32:39.685442   29689 api_server.go:325] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1229 07:32:39.685469   29689 status.go:463] ha-181620 apiserver status = Running (err=<nil>)
	I1229 07:32:39.685478   29689 status.go:176] ha-181620 status: &{Name:ha-181620 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:32:39.685493   29689 status.go:174] checking status of ha-181620-m02 ...
	I1229 07:32:39.687335   29689 status.go:371] ha-181620-m02 host status = "Stopped" (err=<nil>)
	I1229 07:32:39.687362   29689 status.go:384] host is not running, skipping remaining checks
	I1229 07:32:39.687370   29689 status.go:176] ha-181620-m02 status: &{Name:ha-181620-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:32:39.687391   29689 status.go:174] checking status of ha-181620-m03 ...
	I1229 07:32:39.688856   29689 status.go:371] ha-181620-m03 host status = "Running" (err=<nil>)
	I1229 07:32:39.688882   29689 host.go:66] Checking if "ha-181620-m03" exists ...
	I1229 07:32:39.691634   29689 main.go:144] libmachine: domain ha-181620-m03 has defined MAC address 52:54:00:bb:a8:22 in network mk-ha-181620
	I1229 07:32:39.692150   29689 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:a8:22", ip: ""} in network mk-ha-181620: {Iface:virbr1 ExpiryTime:2025-12-29 08:29:29 +0000 UTC Type:0 Mac:52:54:00:bb:a8:22 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-181620-m03 Clientid:01:52:54:00:bb:a8:22}
	I1229 07:32:39.692173   29689 main.go:144] libmachine: domain ha-181620-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:bb:a8:22 in network mk-ha-181620
	I1229 07:32:39.692319   29689 host.go:66] Checking if "ha-181620-m03" exists ...
	I1229 07:32:39.692558   29689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:32:39.694981   29689 main.go:144] libmachine: domain ha-181620-m03 has defined MAC address 52:54:00:bb:a8:22 in network mk-ha-181620
	I1229 07:32:39.695398   29689 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:a8:22", ip: ""} in network mk-ha-181620: {Iface:virbr1 ExpiryTime:2025-12-29 08:29:29 +0000 UTC Type:0 Mac:52:54:00:bb:a8:22 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-181620-m03 Clientid:01:52:54:00:bb:a8:22}
	I1229 07:32:39.695428   29689 main.go:144] libmachine: domain ha-181620-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:bb:a8:22 in network mk-ha-181620
	I1229 07:32:39.695573   29689 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/ha-181620-m03/id_rsa Username:docker}
	I1229 07:32:39.776860   29689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:32:39.798864   29689 kubeconfig.go:125] found "ha-181620" server: "https://192.168.39.254:8443"
	I1229 07:32:39.798893   29689 api_server.go:166] Checking apiserver status ...
	I1229 07:32:39.798938   29689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:32:39.824651   29689 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2378/cgroup
	I1229 07:32:39.839219   29689 ssh_runner.go:195] Run: sudo grep ^0:: /proc/2378/cgroup
	I1229 07:32:39.852877   29689 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod346aeeb31772d4fe628c9ae565e5833a.slice/docker-d4dfe96d777629e7781b8ae17c2d604cf587bc865192d57d769d8b04058fcd48.scope/cgroup.freeze
	I1229 07:32:39.866722   29689 api_server.go:299] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1229 07:32:39.872470   29689 api_server.go:325] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1229 07:32:39.872506   29689 status.go:463] ha-181620-m03 apiserver status = Running (err=<nil>)
	I1229 07:32:39.872518   29689 status.go:176] ha-181620-m03 status: &{Name:ha-181620-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:32:39.872540   29689 status.go:174] checking status of ha-181620-m04 ...
	I1229 07:32:39.874245   29689 status.go:371] ha-181620-m04 host status = "Running" (err=<nil>)
	I1229 07:32:39.874264   29689 host.go:66] Checking if "ha-181620-m04" exists ...
	I1229 07:32:39.877459   29689 main.go:144] libmachine: domain ha-181620-m04 has defined MAC address 52:54:00:c7:34:22 in network mk-ha-181620
	I1229 07:32:39.877937   29689 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c7:34:22", ip: ""} in network mk-ha-181620: {Iface:virbr1 ExpiryTime:2025-12-29 08:31:39 +0000 UTC Type:0 Mac:52:54:00:c7:34:22 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-181620-m04 Clientid:01:52:54:00:c7:34:22}
	I1229 07:32:39.877972   29689 main.go:144] libmachine: domain ha-181620-m04 has defined IP address 192.168.39.57 and MAC address 52:54:00:c7:34:22 in network mk-ha-181620
	I1229 07:32:39.878133   29689 host.go:66] Checking if "ha-181620-m04" exists ...
	I1229 07:32:39.878365   29689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:32:39.880496   29689 main.go:144] libmachine: domain ha-181620-m04 has defined MAC address 52:54:00:c7:34:22 in network mk-ha-181620
	I1229 07:32:39.880885   29689 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c7:34:22", ip: ""} in network mk-ha-181620: {Iface:virbr1 ExpiryTime:2025-12-29 08:31:39 +0000 UTC Type:0 Mac:52:54:00:c7:34:22 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-181620-m04 Clientid:01:52:54:00:c7:34:22}
	I1229 07:32:39.880917   29689 main.go:144] libmachine: domain ha-181620-m04 has defined IP address 192.168.39.57 and MAC address 52:54:00:c7:34:22 in network mk-ha-181620
	I1229 07:32:39.881116   29689 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/ha-181620-m04/id_rsa Username:docker}
	I1229 07:32:39.961215   29689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:32:39.978839   29689 status.go:176] ha-181620-m04 status: &{Name:ha-181620-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.99s)
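The exit status 7 above is expected: with m02 stopped, `minikube status` reports the degraded node and exits non-zero rather than 0. For scripting the same check, the `--output json` form already used in the CopyFile step is easier to consume; a sketch, assuming `jq` is on the PATH and that the multi-node JSON is an array of objects carrying the Name/Host fields seen in the struct dumps above (neither assumption is shown verbatim in this report):

    # a non-zero exit is expected here while ha-181620-m02 is stopped
    out/minikube-linux-amd64 -p ha-181620 status --output json | jq -r '.[] | "\(.Name): \(.Host)"'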

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (29.9s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 node start m02 --alsologtostderr -v 5: (28.892638692s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.90s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (150.6s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 stop --alsologtostderr -v 5
E1229 07:33:43.091406   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 stop --alsologtostderr -v 5: (42.521885325s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 start --wait true --alsologtostderr -v 5
E1229 07:34:08.200174   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:34:35.883531   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 start --wait true --alsologtostderr -v 5: (1m47.938317132s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (150.60s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.45s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 node delete m03 --alsologtostderr -v 5: (6.763349099s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.45s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (39.33s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 stop --alsologtostderr -v 5: (39.26802815s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181620 status --alsologtostderr -v 5: exit status 7 (62.054168ms)

                                                
                                                
-- stdout --
	ha-181620
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-181620-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-181620-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:36:29.328561   31204 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:36:29.329787   31204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:36:29.329938   31204 out.go:374] Setting ErrFile to fd 2...
	I1229 07:36:29.329973   31204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:36:29.330253   31204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:36:29.330481   31204 out.go:368] Setting JSON to false
	I1229 07:36:29.330510   31204 mustload.go:66] Loading cluster: ha-181620
	I1229 07:36:29.330614   31204 notify.go:221] Checking for updates...
	I1229 07:36:29.330975   31204 config.go:182] Loaded profile config "ha-181620": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:36:29.331000   31204 status.go:174] checking status of ha-181620 ...
	I1229 07:36:29.332936   31204 status.go:371] ha-181620 host status = "Stopped" (err=<nil>)
	I1229 07:36:29.332951   31204 status.go:384] host is not running, skipping remaining checks
	I1229 07:36:29.332956   31204 status.go:176] ha-181620 status: &{Name:ha-181620 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:36:29.332972   31204 status.go:174] checking status of ha-181620-m02 ...
	I1229 07:36:29.334113   31204 status.go:371] ha-181620-m02 host status = "Stopped" (err=<nil>)
	I1229 07:36:29.334127   31204 status.go:384] host is not running, skipping remaining checks
	I1229 07:36:29.334131   31204 status.go:176] ha-181620-m02 status: &{Name:ha-181620-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:36:29.334143   31204 status.go:174] checking status of ha-181620-m04 ...
	I1229 07:36:29.335105   31204 status.go:371] ha-181620-m04 host status = "Stopped" (err=<nil>)
	I1229 07:36:29.335121   31204 status.go:384] host is not running, skipping remaining checks
	I1229 07:36:29.335127   31204 status.go:176] ha-181620-m04 status: &{Name:ha-181620-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (39.33s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (100.98s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 start --wait true --alsologtostderr -v 5 --driver=kvm2 
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 start --wait true --alsologtostderr -v 5 --driver=kvm2 : (1m40.286185104s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.98s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (84.35s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 node add --control-plane --alsologtostderr -v 5
E1229 07:38:43.095003   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:39:08.204226   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-181620 node add --control-plane --alsologtostderr -v 5: (1m23.591903835s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-181620 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.35s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

                                                
                                    
TestImageBuild/serial/Setup (36.68s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-516862 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-516862 --driver=kvm2 : (36.6780588s)
--- PASS: TestImageBuild/serial/Setup (36.68s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.56s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-516862
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-516862: (1.561427586s)
--- PASS: TestImageBuild/serial/NormalBuild (1.56s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-516862
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.00s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.82s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-516862
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.82s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-516862
image_test.go:88: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-516862: (1.002892098s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.00s)

                                                
                                    
TestJSONOutput/start/Command (77.97s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-575628 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-575628 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 : (1m17.970996059s)
--- PASS: TestJSONOutput/start/Command (77.97s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-575628 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-575628 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (14.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-575628 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-575628 --output=json --user=testUser: (14.330308818s)
--- PASS: TestJSONOutput/stop/Command (14.33s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-955953 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-955953 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (79.415706ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"66d9e530-b7fd-4792-87c6-dce81af9e61c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-955953] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d5f400b-1c2c-44b0-b408-5701f4ef7ca7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22353"}}
	{"specversion":"1.0","id":"da0e0e8a-d4d1-4e7e-8197-ba92ef7bd45b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b70e59c1-42a9-42a4-bbe7-c48655e94323","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig"}}
	{"specversion":"1.0","id":"72ffcf04-a823-4260-825b-1602596f1471","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube"}}
	{"specversion":"1.0","id":"86db9f04-42fd-4c77-85b6-c3ea65a36d7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4758cb73-0848-4c6a-866e-1543a9799774","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4de70577-7943-4762-b5fa-663ab541b403","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-955953" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-955953
--- PASS: TestErrorJSONOutput (0.24s)
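Each stdout line above is a CloudEvents-style JSON object; the unsupported `--driver=fail` run is expected to end with an `io.k8s.sigs.minikube.error` event (DRV_UNSUPPORTED_OS, exitcode 56) and a matching process exit code. A sketch of pulling that event out of such a stream, assuming `jq` is available (it is not used by the test itself):

    out=$(out/minikube-linux-amd64 start -p json-output-error-955953 --memory=3072 --output=json --wait=true --driver=fail); code=$?
    printf '%s\n' "$out" | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message) (exitcode \(.exitcode))"'
    echo "process exit code: $code"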

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (78.83s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-027808 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-027808 --driver=kvm2 : (37.706391177s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-030379 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-030379 --driver=kvm2 : (38.446427901s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-027808
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-030379
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-030379" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-030379
helpers_test.go:176: Cleaning up "first-027808" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-027808
--- PASS: TestMinikubeProfile (78.83s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.85s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-114934 --memory=3072 --mount-string /tmp/TestMountStartserial71643913/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1229 07:43:26.160295   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-114934 --memory=3072 --mount-string /tmp/TestMountStartserial71643913/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (19.848983238s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.85s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-114934 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-114934 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
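The verification above is two probes over SSH: list the mounted directory, then ask findmnt to describe the mount point as JSON. To inspect the same mount by hand (the trailing jq filter and the expectation of a 9p filesystem are assumptions of this sketch, not something the report asserts):

    out/minikube-linux-amd64 -p mount-start-1-114934 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-114934 ssh -- findmnt --json /minikube-host \
      | jq -r '.filesystems[0].fstype'   # typically 9p for a minikube host mount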

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.9s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-132048 --memory=3072 --mount-string /tmp/TestMountStartserial71643913/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1229 07:43:43.091231   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-132048 --memory=3072 --mount-string /tmp/TestMountStartserial71643913/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (19.899730427s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.90s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132048 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132048 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-114934 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132048 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132048 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                    
TestMountStart/serial/Stop (1.36s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-132048
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-132048: (1.360461207s)
--- PASS: TestMountStart/serial/Stop (1.36s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.52s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-132048
E1229 07:44:08.204601   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-132048: (18.521791366s)
--- PASS: TestMountStart/serial/RestartStopped (19.52s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132048 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132048 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (110.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178114 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 
E1229 07:45:31.243774   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178114 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 : (1m49.993337865s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.36s)
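
The fresh two-node start above reduces to one start invocation plus a status check; a minimal hand-run sketch with the flags from this run (profile name illustrative, `minikube` again standing for the binary under test):

	# bring up a two-node cluster and wait for all components to be ready
	minikube start -p multinode-178114 --nodes=2 --memory=3072 --wait=true --driver=kvm2
	# the control plane and the worker should both report Running
	minikube -p multinode-178114 status --alsologtostderr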

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-178114 -- rollout status deployment/busybox: (3.226174769s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-4dk2b -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-b7qxn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-4dk2b -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-b7qxn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-4dk2b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-b7qxn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.91s)
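
The DNS checks above can be replayed through minikube's bundled kubectl; the manifest path and pod names are the ones from this run and will differ elsewhere:

	# deploy the two-replica busybox test and wait for the rollout to finish
	minikube kubectl -p multinode-178114 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	minikube kubectl -p multinode-178114 -- rollout status deployment/busybox
	# resolve an external name and the in-cluster service name from one of the pods
	minikube kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-4dk2b -- nslookup kubernetes.io
	minikube kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-4dk2b -- nslookup kubernetes.default.svc.cluster.local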

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-4dk2b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-4dk2b -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-b7qxn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178114 -- exec busybox-769dd8b7dd-b7qxn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (47.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-178114 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-178114 -v=5 --alsologtostderr: (46.648795718s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.12s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-178114 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp testdata/cp-test.txt multinode-178114:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp multinode-178114:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1125405884/001/cp-test_multinode-178114.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp multinode-178114:/home/docker/cp-test.txt multinode-178114-m02:/home/docker/cp-test_multinode-178114_multinode-178114-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m02 "sudo cat /home/docker/cp-test_multinode-178114_multinode-178114-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp multinode-178114:/home/docker/cp-test.txt multinode-178114-m03:/home/docker/cp-test_multinode-178114_multinode-178114-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m03 "sudo cat /home/docker/cp-test_multinode-178114_multinode-178114-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp testdata/cp-test.txt multinode-178114-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp multinode-178114-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1125405884/001/cp-test_multinode-178114-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp multinode-178114-m02:/home/docker/cp-test.txt multinode-178114:/home/docker/cp-test_multinode-178114-m02_multinode-178114.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114 "sudo cat /home/docker/cp-test_multinode-178114-m02_multinode-178114.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp multinode-178114-m02:/home/docker/cp-test.txt multinode-178114-m03:/home/docker/cp-test_multinode-178114-m02_multinode-178114-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m03 "sudo cat /home/docker/cp-test_multinode-178114-m02_multinode-178114-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp testdata/cp-test.txt multinode-178114-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp multinode-178114-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1125405884/001/cp-test_multinode-178114-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp multinode-178114-m03:/home/docker/cp-test.txt multinode-178114:/home/docker/cp-test_multinode-178114-m03_multinode-178114.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114 "sudo cat /home/docker/cp-test_multinode-178114-m03_multinode-178114.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 cp multinode-178114-m03:/home/docker/cp-test.txt multinode-178114-m02:/home/docker/cp-test_multinode-178114-m03_multinode-178114-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 ssh -n multinode-178114-m02 "sudo cat /home/docker/cp-test_multinode-178114-m03_multinode-178114-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.06s)
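
Every copy check above follows the same pattern: `minikube cp` into a node, then `minikube ssh -n <node>` to read the file back. One round of that pattern, with the names from this run:

	# host -> node copy, verified on the node
	minikube -p multinode-178114 cp testdata/cp-test.txt multinode-178114:/home/docker/cp-test.txt
	minikube -p multinode-178114 ssh -n multinode-178114 "sudo cat /home/docker/cp-test.txt"
	# node -> node copy, verified on the destination node
	minikube -p multinode-178114 cp multinode-178114:/home/docker/cp-test.txt multinode-178114-m02:/home/docker/cp-test_multinode-178114_multinode-178114-m02.txt
	minikube -p multinode-178114 ssh -n multinode-178114-m02 "sudo cat /home/docker/cp-test_multinode-178114_multinode-178114-m02.txt"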

                                                
                                    
TestMultiNode/serial/StopNode (2.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-178114 node stop m03: (1.816754152s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178114 status: exit status 7 (350.999566ms)

                                                
                                                
-- stdout --
	multinode-178114
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-178114-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-178114-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178114 status --alsologtostderr: exit status 7 (351.876934ms)

                                                
                                                
-- stdout --
	multinode-178114
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-178114-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-178114-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:47:11.570147   37246 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:47:11.570391   37246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:47:11.570399   37246 out.go:374] Setting ErrFile to fd 2...
	I1229 07:47:11.570403   37246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:47:11.570570   37246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:47:11.570726   37246 out.go:368] Setting JSON to false
	I1229 07:47:11.570750   37246 mustload.go:66] Loading cluster: multinode-178114
	I1229 07:47:11.570853   37246 notify.go:221] Checking for updates...
	I1229 07:47:11.571124   37246 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:47:11.571140   37246 status.go:174] checking status of multinode-178114 ...
	I1229 07:47:11.573223   37246 status.go:371] multinode-178114 host status = "Running" (err=<nil>)
	I1229 07:47:11.573248   37246 host.go:66] Checking if "multinode-178114" exists ...
	I1229 07:47:11.576444   37246 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:47:11.577141   37246 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:44:34 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:47:11.577204   37246 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:47:11.577458   37246 host.go:66] Checking if "multinode-178114" exists ...
	I1229 07:47:11.577782   37246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:47:11.580870   37246 main.go:144] libmachine: domain multinode-178114 has defined MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:47:11.581404   37246 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:d2:7c", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:44:34 +0000 UTC Type:0 Mac:52:54:00:52:d2:7c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-178114 Clientid:01:52:54:00:52:d2:7c}
	I1229 07:47:11.581440   37246 main.go:144] libmachine: domain multinode-178114 has defined IP address 192.168.39.92 and MAC address 52:54:00:52:d2:7c in network mk-multinode-178114
	I1229 07:47:11.581645   37246 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114/id_rsa Username:docker}
	I1229 07:47:11.661740   37246 ssh_runner.go:195] Run: systemctl --version
	I1229 07:47:11.668506   37246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:47:11.686983   37246 kubeconfig.go:125] found "multinode-178114" server: "https://192.168.39.92:8443"
	I1229 07:47:11.687019   37246 api_server.go:166] Checking apiserver status ...
	I1229 07:47:11.687070   37246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:47:11.708556   37246 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2406/cgroup
	I1229 07:47:11.720436   37246 ssh_runner.go:195] Run: sudo grep ^0:: /proc/2406/cgroup
	I1229 07:47:11.733010   37246 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38e8599b7482ef471a759bacef66e1e4.slice/docker-cad4676f850186d89109a40e53b4cb403817ed13c388217e2f066902c3701e6a.scope/cgroup.freeze
	I1229 07:47:11.747108   37246 api_server.go:299] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1229 07:47:11.752235   37246 api_server.go:325] https://192.168.39.92:8443/healthz returned 200:
	ok
	I1229 07:47:11.752260   37246 status.go:463] multinode-178114 apiserver status = Running (err=<nil>)
	I1229 07:47:11.752269   37246 status.go:176] multinode-178114 status: &{Name:multinode-178114 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:47:11.752284   37246 status.go:174] checking status of multinode-178114-m02 ...
	I1229 07:47:11.753844   37246 status.go:371] multinode-178114-m02 host status = "Running" (err=<nil>)
	I1229 07:47:11.753860   37246 host.go:66] Checking if "multinode-178114-m02" exists ...
	I1229 07:47:11.756377   37246 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:47:11.756787   37246 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:45:36 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:47:11.756828   37246 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:47:11.756954   37246 host.go:66] Checking if "multinode-178114-m02" exists ...
	I1229 07:47:11.757158   37246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:47:11.759892   37246 main.go:144] libmachine: domain multinode-178114-m02 has defined MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:47:11.760302   37246 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:c7:ce", ip: ""} in network mk-multinode-178114: {Iface:virbr1 ExpiryTime:2025-12-29 08:45:36 +0000 UTC Type:0 Mac:52:54:00:a6:c7:ce Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-178114-m02 Clientid:01:52:54:00:a6:c7:ce}
	I1229 07:47:11.760325   37246 main.go:144] libmachine: domain multinode-178114-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:a6:c7:ce in network mk-multinode-178114
	I1229 07:47:11.760440   37246 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9552/.minikube/machines/multinode-178114-m02/id_rsa Username:docker}
	I1229 07:47:11.843908   37246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:47:11.860011   37246 status.go:176] multinode-178114-m02 status: &{Name:multinode-178114-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:47:11.860071   37246 status.go:174] checking status of multinode-178114-m03 ...
	I1229 07:47:11.861846   37246 status.go:371] multinode-178114-m03 host status = "Stopped" (err=<nil>)
	I1229 07:47:11.861865   37246 status.go:384] host is not running, skipping remaining checks
	I1229 07:47:11.861870   37246 status.go:176] multinode-178114-m03 status: &{Name:multinode-178114-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.52s)
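
The `exit status 7` above is the expected outcome here: with one node stopped, `minikube status` reports the degraded state through its exit code rather than through a test failure. A minimal sketch of that check (names from this run):

	# stop one worker, then inspect status; the stopped node surfaces as a non-zero exit code
	minikube -p multinode-178114 node stop m03
	minikube -p multinode-178114 status
	echo $?   # 7 in this run, with m03 shown as Stopped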

                                                
                                    
TestMultiNode/serial/StartAfterStop (43.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-178114 node start m03 -v=5 --alsologtostderr: (43.262312571s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (43.80s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (147.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-178114
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-178114
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-178114: (28.780136952s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178114 --wait=true -v=5 --alsologtostderr
E1229 07:48:43.090739   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:49:08.199874   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178114 --wait=true -v=5 --alsologtostderr: (1m58.886727623s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-178114
--- PASS: TestMultiNode/serial/RestartKeepsNodes (147.79s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-178114 node delete m03: (1.771349629s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.26s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (27.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-178114 stop: (27.192589644s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178114 status: exit status 7 (63.71327ms)

                                                
                                                
-- stdout --
	multinode-178114
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-178114-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178114 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178114 status --alsologtostderr: exit status 7 (69.952425ms)

                                                
                                                
-- stdout --
	multinode-178114
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-178114-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:50:53.033593   38560 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:50:53.033893   38560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:50:53.033906   38560 out.go:374] Setting ErrFile to fd 2...
	I1229 07:50:53.033910   38560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:50:53.034134   38560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:50:53.034322   38560 out.go:368] Setting JSON to false
	I1229 07:50:53.034345   38560 mustload.go:66] Loading cluster: multinode-178114
	I1229 07:50:53.034494   38560 notify.go:221] Checking for updates...
	I1229 07:50:53.034894   38560 config.go:182] Loaded profile config "multinode-178114": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:50:53.034920   38560 status.go:174] checking status of multinode-178114 ...
	I1229 07:50:53.037278   38560 status.go:371] multinode-178114 host status = "Stopped" (err=<nil>)
	I1229 07:50:53.037296   38560 status.go:384] host is not running, skipping remaining checks
	I1229 07:50:53.037301   38560 status.go:176] multinode-178114 status: &{Name:multinode-178114 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:50:53.037317   38560 status.go:174] checking status of multinode-178114-m02 ...
	I1229 07:50:53.038584   38560 status.go:371] multinode-178114-m02 host status = "Stopped" (err=<nil>)
	I1229 07:50:53.038600   38560 status.go:384] host is not running, skipping remaining checks
	I1229 07:50:53.038604   38560 status.go:176] multinode-178114-m02 status: &{Name:multinode-178114-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (27.33s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-178114
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178114-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-178114-m02 --driver=kvm2 : exit status 14 (80.638457ms)

                                                
                                                
-- stdout --
	* [multinode-178114-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-178114-m02' is duplicated with machine name 'multinode-178114-m02' in profile 'multinode-178114'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178114-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178114-m03 --driver=kvm2 : (40.107547703s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-178114
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-178114: exit status 80 (219.802469ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-178114 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-178114-m03 already exists in multinode-178114-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-178114-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.28s)

                                                
                                    
TestScheduledStopUnix (109.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-119151 --memory=3072 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-119151 --memory=3072 --driver=kvm2 : (38.197261017s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119151 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:53:21.012153   39857 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:53:21.012405   39857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:53:21.012414   39857 out.go:374] Setting ErrFile to fd 2...
	I1229 07:53:21.012418   39857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:53:21.012643   39857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:53:21.012912   39857 out.go:368] Setting JSON to false
	I1229 07:53:21.013008   39857 mustload.go:66] Loading cluster: scheduled-stop-119151
	I1229 07:53:21.013329   39857 config.go:182] Loaded profile config "scheduled-stop-119151": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:53:21.013435   39857 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/scheduled-stop-119151/config.json ...
	I1229 07:53:21.013630   39857 mustload.go:66] Loading cluster: scheduled-stop-119151
	I1229 07:53:21.013735   39857 config.go:182] Loaded profile config "scheduled-stop-119151": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-119151 -n scheduled-stop-119151
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119151 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:53:21.340981   39902 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:53:21.341246   39902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:53:21.341256   39902 out.go:374] Setting ErrFile to fd 2...
	I1229 07:53:21.341260   39902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:53:21.341504   39902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:53:21.341826   39902 out.go:368] Setting JSON to false
	I1229 07:53:21.342051   39902 daemonize_unix.go:73] killing process 39891 as it is an old scheduled stop
	I1229 07:53:21.342160   39902 mustload.go:66] Loading cluster: scheduled-stop-119151
	I1229 07:53:21.342473   39902 config.go:182] Loaded profile config "scheduled-stop-119151": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:53:21.342545   39902 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/scheduled-stop-119151/config.json ...
	I1229 07:53:21.342727   39902 mustload.go:66] Loading cluster: scheduled-stop-119151
	I1229 07:53:21.342856   39902 config.go:182] Loaded profile config "scheduled-stop-119151": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1229 07:53:21.348232   13486 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/scheduled-stop-119151/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119151 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1229 07:53:43.094565   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-119151 -n scheduled-stop-119151
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-119151
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119151 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:53:47.133975   40068 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:53:47.134211   40068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:53:47.134219   40068 out.go:374] Setting ErrFile to fd 2...
	I1229 07:53:47.134224   40068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:53:47.134418   40068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9552/.minikube/bin
	I1229 07:53:47.134660   40068 out.go:368] Setting JSON to false
	I1229 07:53:47.134737   40068 mustload.go:66] Loading cluster: scheduled-stop-119151
	I1229 07:53:47.135092   40068 config.go:182] Loaded profile config "scheduled-stop-119151": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:53:47.135170   40068 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/scheduled-stop-119151/config.json ...
	I1229 07:53:47.135367   40068 mustload.go:66] Loading cluster: scheduled-stop-119151
	I1229 07:53:47.135467   40068 config.go:182] Loaded profile config "scheduled-stop-119151": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1229 07:54:08.204201   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-119151
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-119151: exit status 7 (59.320316ms)

                                                
                                                
-- stdout --
	scheduled-stop-119151
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-119151 -n scheduled-stop-119151
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-119151 -n scheduled-stop-119151: exit status 7 (59.383397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-119151" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-119151
--- PASS: TestScheduledStopUnix (109.92s)
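
The scheduled-stop flow above exercises three stop variants against one profile; a hand-run sketch with the profile name from this run:

	# schedule a stop five minutes out, then replace it with a 15s schedule
	minikube stop -p scheduled-stop-119151 --schedule 5m
	minikube stop -p scheduled-stop-119151 --schedule 15s
	# cancel whatever is pending, or let a schedule fire and confirm the host went down
	minikube stop -p scheduled-stop-119151 --cancel-scheduled
	minikube status -p scheduled-stop-119151 --format={{.TimeToStop}}
	minikube status -p scheduled-stop-119151 --format={{.Host}}   # prints Stopped (exit 7) once the stop has fired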

                                                
                                    
TestSkaffold (120.08s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe111160154 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-474429 --memory=3072 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-474429 --memory=3072 --driver=kvm2 : (37.046762969s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe111160154 run --minikube-profile skaffold-474429 --kube-context skaffold-474429 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe111160154 run --minikube-profile skaffold-474429 --kube-context skaffold-474429 --status-check=true --port-forward=false --interactive=false: (1m10.29121947s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-7bc9f68bbb-z8fft" [ff57e16e-0a8d-4485-8dab-0abe6f51a7ec] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003468432s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-79c444cd59-bzkpn" [8d2e03fb-474d-4f51-9c2e-6fff21a905ed] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003865332s
helpers_test.go:176: Cleaning up "skaffold-474429" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-474429
--- PASS: TestSkaffold (120.08s)
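
The skaffold run above leans on skaffold's own minikube integration; note that the test first copies the binary under test to a plain `minikube` name (the copy step at skaffold_test.go:86) before invoking skaffold. A sketch of the two key commands, with the profile name from this run:

	# create the target cluster, then let skaffold build and deploy against it
	minikube start -p skaffold-474429 --memory=3072 --driver=kvm2
	skaffold run --minikube-profile skaffold-474429 --kube-context skaffold-474429 \
	  --status-check=true --port-forward=false --interactive=false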

                                                
                                    
TestRunningBinaryUpgrade (496s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.339948439 start -p running-upgrade-930016 --memory=3072 --vm-driver=kvm2 
E1229 08:02:11.244540   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:02:42.669869   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.339948439 start -p running-upgrade-930016 --memory=3072 --vm-driver=kvm2 : (3m23.794783699s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-930016 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-930016 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (4m50.590936978s)
helpers_test.go:176: Cleaning up "running-upgrade-930016" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-930016
E1229 08:10:25.222382   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestRunningBinaryUpgrade (496.00s)
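
The running-binary upgrade above starts the profile with a released v1.35.0 binary and then re-runs `start` on the same, still-running profile with the binary under test; as the two invocations show, the old binary is passed `--vm-driver` and the new one `--driver`. A condensed sketch with the paths from this run:

	# bring the profile up with the old release, then upgrade it in place with the new binary
	/tmp/minikube-v1.35.0.339948439 start -p running-upgrade-930016 --memory=3072 --vm-driver=kvm2
	out/minikube-linux-amd64 start -p running-upgrade-930016 --memory=3072 --driver=kvm2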

                                                
                                    
TestKubernetesUpgrade (164.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 : (1m9.108170087s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-226160 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-226160 --alsologtostderr: (3.07550784s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-226160 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-226160 status --format={{.Host}}: exit status 7 (74.399696ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2 : (44.104567986s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-226160 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 106 (90.549151ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-226160] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-226160
	    minikube start -p kubernetes-upgrade-226160 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2261602 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-226160 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2 
E1229 08:04:04.590731   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:08.200789   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:27.731819   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:27.737124   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:27.747456   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:27.767929   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:27.808297   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:27.888701   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:28.049203   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:28.370025   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:29.010614   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:30.291775   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:32.852996   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:37.974173   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:48.214946   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2 : (46.947705051s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-226160" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-226160
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-226160: (1.021624573s)
--- PASS: TestKubernetesUpgrade (164.51s)
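
The Kubernetes upgrade above is: start on v1.28.0, stop, start again on v1.35.0; the follow-up attempt to go back down is refused with exit status 106 and the K8S_DOWNGRADE_UNSUPPORTED advice shown in the stderr block. The same flow by hand (profile name from this run):

	# create on the old Kubernetes version, stop, then upgrade in place
	minikube start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2
	minikube stop -p kubernetes-upgrade-226160
	minikube start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.35.0 --driver=kvm2
	# downgrading the existing cluster is rejected (exit 106 in this run)
	minikube start -p kubernetes-upgrade-226160 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2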

                                                
                                    
x
+
TestPreload/Start-NoPreload-PullImage (150.41s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-376108 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-376108 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 : (2m15.146332363s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-376108 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-linux-amd64 -p test-preload-376108 image pull ghcr.io/medyagh/image-mirrors/busybox:latest: (1.279345744s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-376108
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-376108: (13.981484574s)
--- PASS: TestPreload/Start-NoPreload-PullImage (150.41s)
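
The preload scenario above has three visible steps: start with the preloaded image tarball disabled, pull an extra image into the cluster, then stop it. The steps by hand (profile and image as in this run):

	# start without the preloaded images tarball, pull one image, then stop
	minikube start -p test-preload-376108 --memory=3072 --wait=true --preload=false --driver=kvm2
	minikube -p test-preload-376108 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
	minikube stop -p test-preload-376108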

                                                
                                    
TestPause/serial/Start (83.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-729776 --memory=3072 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-729776 --memory=3072 --install-addons=false --wait=all --driver=kvm2 : (1m23.667259084s)
--- PASS: TestPause/serial/Start (83.67s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (58.55s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-729776 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-729776 --alsologtostderr -v=1 --driver=kvm2 : (58.521949354s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (58.55s)

                                                
                                    
TestISOImage/Setup (24.22s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-908034 --no-kubernetes --memory=2500mb --driver=kvm2 
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-908034 --no-kubernetes --memory=2500mb --driver=kvm2 : (24.216851542s)
--- PASS: TestISOImage/Setup (24.22s)
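
The ISO setup above boots a Kubernetes-free guest; the Binaries subtests that follow simply `ssh` in and check that each expected tool is present. A compact, hand-written version of those checks (profile name from this run; the loop is an illustration, not the test's own code):

	# boot a minimal guest from the ISO, then probe for the bundled binaries
	minikube start -p guest-908034 --no-kubernetes --memory=2500mb --driver=kvm2
	for tool in crictl curl docker git iptables podman; do
	  minikube -p guest-908034 ssh "which $tool"
	done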

                                                
                                    
TestISOImage/Binaries/crictl (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.21s)

                                                
                                    
TestISOImage/Binaries/curl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

                                                
                                    
TestISOImage/Binaries/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.19s)

                                                
                                    
TestISOImage/Binaries/git (0.32s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.32s)

                                                
                                    
TestISOImage/Binaries/iptables (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.17s)

                                                
                                    
TestISOImage/Binaries/podman (0.32s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.32s)

                                                
                                    
TestISOImage/Binaries/rsync (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.22s)

                                                
                                    
TestISOImage/Binaries/socat (0.36s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.36s)

                                                
                                    
TestISOImage/Binaries/wget (0.41s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.41s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.20s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.20s)

                                                
                                    
TestPause/serial/Pause (0.6s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-729776 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.60s)

                                                
                                    
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-729776 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-729776 --output=json --layout=cluster: exit status 2 (235.788178ms)

                                                
                                                
-- stdout --
	{"Name":"pause-729776","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-729776","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

                                                
                                    
TestPause/serial/Unpause (0.58s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-729776 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

                                                
                                    
TestPause/serial/PauseAgain (0.77s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-729776 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.77s)

                                                
                                    
TestPause/serial/DeletePaused (0.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-729776 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.85s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (6.64s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (6.643787486s)
--- PASS: TestPause/serial/VerifyDeletedResources (6.64s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (64.65s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-376108 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-376108 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m4.251688948s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-376108 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (64.65s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911723 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-911723 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 14 (96.186627ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-911723] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (76.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911723 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
E1229 07:59:08.200528   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911723 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (1m16.568570359s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-911723 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (76.89s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (37.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911723 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911723 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (36.807260813s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-911723 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-911723 status -o json: exit status 2 (225.790002ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-911723","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-911723
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (37.98s)

                                                
                                    
TestNoKubernetes/serial/Start (28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911723 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911723 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (28.00114188s)
--- PASS: TestNoKubernetes/serial/Start (28.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22353-9552/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-911723 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-911723 "sudo systemctl is-active --quiet service kubelet": exit status 1 (180.64924ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.77s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-911723
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-911723: (1.464604825s)
--- PASS: TestNoKubernetes/serial/Stop (1.46s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.63s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E1229 08:01:30.987350   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/Setup (0.63s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (129.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3897439410 start -p stopped-upgrade-347266 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3897439410 start -p stopped-upgrade-347266 --memory=3072 --vm-driver=kvm2 : (1m22.015836633s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3897439410 -p stopped-upgrade-347266 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3897439410 -p stopped-upgrade-347266 stop: (5.44868762s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-347266 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-347266 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (42.177606101s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (129.64s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (33.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911723 --driver=kvm2 
E1229 08:01:41.228030   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:02:01.709104   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911723 --driver=kvm2 : (33.886662701s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (33.89s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-911723 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-911723 "sudo systemctl is-active --quiet service kubelet": exit status 1 (178.561937ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-347266
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-347266: (1.098242314s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (97.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-736414 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
E1229 08:03:43.090840   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-736414 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (1m37.164016684s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (97.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (70.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-433585 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-433585 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0: (1m10.413766219s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (103.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-099357 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.35.0
E1229 08:05:08.695866   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-099357 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.35.0: (1m43.427108366s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (103.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-736414 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d28c94c2-1458-4fe7-8151-a92d7e9b8792] Pending
helpers_test.go:353: "busybox" [d28c94c2-1458-4fe7-8151-a92d7e9b8792] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d28c94c2-1458-4fe7-8151-a92d7e9b8792] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.010226786s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-736414 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-736414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-736414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.226146913s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-736414 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-736414 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-736414 --alsologtostderr -v=3: (13.34825771s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-736414 -n old-k8s-version-736414
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-736414 -n old-k8s-version-736414: exit status 7 (64.724484ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-736414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-736414 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
E1229 08:05:49.656613   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-736414 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (47.903792967s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-736414 -n old-k8s-version-736414
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-433585 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [36e2f8cb-adff-47e3-b0ee-753c6e540c1d] Pending
helpers_test.go:353: "busybox" [36e2f8cb-adff-47e3-b0ee-753c6e540c1d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [36e2f8cb-adff-47e3-b0ee-753c6e540c1d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004863651s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-433585 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-433585 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-433585 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-433585 --alsologtostderr -v=3
E1229 08:06:20.747062   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-433585 --alsologtostderr -v=3: (13.609091475s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-433585 -n no-preload-433585
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-433585 -n no-preload-433585: exit status 7 (63.333732ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-433585 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (54.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-433585 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-433585 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0: (53.856705001s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-433585 -n no-preload-433585
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-mn7kn" [821695f4-aaad-4b83-a2ab-71e8b38af8eb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-mn7kn" [821695f4-aaad-4b83-a2ab-71e8b38af8eb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004411052s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-099357 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5c576ac4-d1b9-4a7a-9050-c3083307ebde] Pending
helpers_test.go:353: "busybox" [5c576ac4-d1b9-4a7a-9050-c3083307ebde] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5c576ac4-d1b9-4a7a-9050-c3083307ebde] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005476518s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-099357 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-mn7kn" [821695f4-aaad-4b83-a2ab-71e8b38af8eb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005276087s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-736414 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-099357 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-099357 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.005602356s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-099357 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-099357 --alsologtostderr -v=3
E1229 08:06:48.431532   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-099357 --alsologtostderr -v=3: (13.716875081s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-736414 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-736414 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-736414 -n old-k8s-version-736414
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-736414 -n old-k8s-version-736414: exit status 2 (264.619122ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-736414 -n old-k8s-version-736414
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-736414 -n old-k8s-version-736414: exit status 2 (264.283676ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-736414 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-736414 -n old-k8s-version-736414
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-736414 -n old-k8s-version-736414
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-219443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-219443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.35.0: (1m24.928353s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-099357 -n embed-certs-099357
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-099357 -n embed-certs-099357: exit status 7 (72.472276ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-099357 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (62.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-099357 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.35.0
E1229 08:07:11.577668   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-099357 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.35.0: (1m2.057333319s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-099357 -n embed-certs-099357
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (62.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-xsdtk" [e833b289-fa08-429b-a642-88984a8a0b2b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-xsdtk" [e833b289-fa08-429b-a642-88984a8a0b2b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.00335935s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-xsdtk" [e833b289-fa08-429b-a642-88984a8a0b2b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005225826s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-433585 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-433585 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-433585 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-433585 -n no-preload-433585
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-433585 -n no-preload-433585: exit status 2 (263.461263ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-433585 -n no-preload-433585
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-433585 -n no-preload-433585: exit status 2 (270.311019ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-433585 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-433585 -n no-preload-433585
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-433585 -n no-preload-433585
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-996176 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-996176 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0: (49.002552669s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-l4plh" [1c104451-d58a-4785-a112-a569e5f8ebff] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00501371s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-l4plh" [1c104451-d58a-4785-a112-a569e5f8ebff] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005244393s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-099357 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-099357 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-099357 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-099357 -n embed-certs-099357
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-099357 -n embed-certs-099357: exit status 2 (258.22221ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-099357 -n embed-certs-099357
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-099357 -n embed-certs-099357: exit status 2 (244.159976ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-099357 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-099357 -n embed-certs-099357
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-099357 -n embed-certs-099357
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.92s)
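
For reference, the pause sequence exercised above can be reproduced by hand. The sketch below is a rough manual equivalent, assuming an installed minikube binary and reusing the profile name from this run; note that "status" deliberately exits non-zero while components are paused or stopped, which is why the test tolerates exit status 2.
# Manual pause/unpause check (sketch; profile name taken from this run)
minikube pause -p embed-certs-099357                                # freeze the apiserver and kubelet
minikube status -p embed-certs-099357 --format='{{.APIServer}}'     # prints "Paused", exits non-zero
minikube status -p embed-certs-099357 --format='{{.Kubelet}}'       # prints "Stopped", exits non-zero
minikube unpause -p embed-certs-099357                              # resume the components
minikube status -p embed-certs-099357                               # should report Running again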

TestNetworkPlugins/group/auto/Start (83.53s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m23.532495234s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.53s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-219443 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e13a9758-b74e-486b-8faf-d9f010f0ff3c] Pending
helpers_test.go:353: "busybox" [e13a9758-b74e-486b-8faf-d9f010f0ff3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e13a9758-b74e-486b-8faf-d9f010f0ff3c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.006201292s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-219443 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)
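
The DeployApp step above creates a busybox pod from testdata/busybox.yaml, waits for it to become Ready, and then reads the container's open-file-descriptor limit, apparently to confirm that the runtime's ulimit settings reach the workload. A minimal manual equivalent (commands as run by the test):
# Deploy the test pod and check its file-descriptor limit once it is Running
kubectl --context default-k8s-diff-port-219443 create -f testdata/busybox.yaml
kubectl --context default-k8s-diff-port-219443 exec busybox -- /bin/sh -c "ulimit -n"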

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-219443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-219443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.348030892s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-219443 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.45s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-219443 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-219443 --alsologtostderr -v=3: (13.734544363s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.73s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-996176 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)
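
The addon-enable step above points the metrics-server addon at a stand-in image and a fake registry, which suggests it is exercising the --images/--registries override path rather than producing a working metrics-server. A sketch of the same invocation (flags copied from this run, assuming an installed minikube binary):
# Enable an addon while overriding its image and registry
minikube addons enable metrics-server -p newest-cni-996176 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain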

TestStartStop/group/newest-cni/serial/Stop (6.47s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-996176 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-996176 --alsologtostderr -v=3: (6.465849533s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.47s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-996176 -n newest-cni-996176
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-996176 -n newest-cni-996176: exit status 7 (63.151397ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-996176 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/SecondStart (33.87s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-996176 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0
E1229 08:08:43.091553   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-996176 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0: (33.587385711s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-996176 -n newest-cni-996176
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.87s)
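
Both newest-cni starts run minikube in bare CNI mode: the cluster comes up with a pod CIDR configured but no network plugin installed, which is why the suite keeps logging "cni mode requires additional setup before pods can schedule". A sketch of the same start (flags copied from this run; the follow-up CNI install is an assumption and not part of the test):
# Start a cluster that expects an external CNI to be applied afterwards
minikube start -p newest-cni-996176 --memory=3072 --driver=kvm2 \
  --kubernetes-version=v1.35.0 --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --wait=apiserver,system_pods,default_sa
# Pods stay Pending until some CNI manifest is applied, e.g. (hypothetical, not done by this test):
# kubectl --context newest-cni-996176 apply -f <cni-manifest.yaml>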

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-219443 -n default-k8s-diff-port-219443
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-219443 -n default-k8s-diff-port-219443: exit status 7 (84.047236ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-219443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-219443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.35.0
E1229 08:09:08.199897   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-219443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.35.0: (58.018715845s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-219443 -n default-k8s-diff-port-219443
I1229 08:09:41.776004   13486 config.go:182] Loaded profile config "auto-897217": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.42s)
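
The second start above brings the previously stopped profile back up on a non-default API server port and then checks the host state. A rough manual equivalent (flags taken from this run, assuming an installed minikube binary):
# Restart the stopped profile on API server port 8444 and confirm the VM is running again
minikube start -p default-k8s-diff-port-219443 --memory=3072 --driver=kvm2 \
  --kubernetes-version=v1.35.0 --apiserver-port=8444 --wait=true
minikube status -p default-k8s-diff-port-219443 --format='{{.Host}}'   # expect "Running"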

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-996176 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/newest-cni/serial/Pause (2.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-996176 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-996176 -n newest-cni-996176
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-996176 -n newest-cni-996176: exit status 2 (271.892093ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-996176 -n newest-cni-996176
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-996176 -n newest-cni-996176: exit status 2 (266.443526ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-996176 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-996176 -n newest-cni-996176
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-996176 -n newest-cni-996176
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)

TestNetworkPlugins/group/kindnet/Start (69.28s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1229 08:09:27.731013   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m9.281244486s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.28s)

TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-897217 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

TestNetworkPlugins/group/auto/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-897217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-pxmcd" [18e4e107-1e88-431c-bbb2-9fb14910568b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-pxmcd" [18e4e107-1e88-431c-bbb2-9fb14910568b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005192165s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.32s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-wjw9w" [a8fc59ee-1de7-457e-b669-2d37237c5563] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-wjw9w" [a8fc59ee-1de7-457e-b669-2d37237c5563] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.323352774s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.32s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-wjw9w" [a8fc59ee-1de7-457e-b669-2d37237c5563] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005108881s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-219443 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-897217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
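
The DNS, Localhost and HairPin checks for the auto profile are three one-line probes run inside the netcat deployment: cluster DNS resolution, a loopback connection to the pod's own port, and a connection back to the pod through its service (hairpin traffic). The commands, as run by the test:
# DNS: resolve the kubernetes.default service from inside the pod
kubectl --context auto-897217 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: connect to the pod's own port over loopback
kubectl --context auto-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: reach the pod through its own service name, so traffic loops back to the pod
kubectl --context auto-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"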

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-219443 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-219443 --alsologtostderr -v=1
E1229 08:09:55.418648   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/gvisor-593112/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-219443 -n default-k8s-diff-port-219443
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-219443 -n default-k8s-diff-port-219443: exit status 2 (255.994589ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-219443 -n default-k8s-diff-port-219443
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-219443 -n default-k8s-diff-port-219443: exit status 2 (272.782487ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-219443 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-219443 -n default-k8s-diff-port-219443
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-219443 -n default-k8s-diff-port-219443
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.96s)

TestNetworkPlugins/group/calico/Start (100.18s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m40.182089154s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.18s)

TestNetworkPlugins/group/custom-flannel/Start (79.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E1229 08:10:20.101827   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:10:20.107195   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:10:20.117582   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:10:20.137948   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:10:20.178281   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:10:20.258860   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:10:20.419624   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:10:20.740073   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:10:21.380707   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:10:22.661979   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m19.610908071s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (79.61s)

TestNetworkPlugins/group/false/Start (104.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m44.778516506s)
--- PASS: TestNetworkPlugins/group/false/Start (104.78s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-vjmz2" [ec2f4919-ce14-4c68-8530-9c62262eb348] Running
E1229 08:10:30.342736   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004576517s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
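
The ControllerPod step only confirms that the kindnet DaemonSet pod is Running and healthy in kube-system. A manual spot check could look like this (a sketch; the label selector is the one the test waits on, and the timeout value is illustrative):
# List the kindnet pods and wait for them to report Ready
kubectl --context kindnet-897217 get pods -n kube-system -l app=kindnet
kubectl --context kindnet-897217 wait --for=condition=Ready pod -l app=kindnet -n kube-system --timeout=600s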

TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-897217 "pgrep -a kubelet"
I1229 08:10:33.576335   13486 config.go:182] Loaded profile config "kindnet-897217": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-897217 replace --force -f testdata/netcat-deployment.yaml
I1229 08:10:33.862492   13486 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-sgd2l" [fbfe760c-fb0a-4483-bedb-b902d6981e4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-sgd2l" [fbfe760c-fb0a-4483-bedb-b902d6981e4f] Running
E1229 08:10:40.583257   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003900097s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-897217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (96.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E1229 08:11:03.882282   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/no-preload-433585/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:11:06.443433   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/no-preload-433585/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:11:11.564512   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/no-preload-433585/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:11:20.747372   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/skaffold-474429/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:11:21.805251   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/no-preload-433585/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m36.145701631s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (96.15s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-897217 "pgrep -a kubelet"
I1229 08:11:28.582047   13486 config.go:182] Loaded profile config "custom-flannel-897217": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-897217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rl772" [fd0587cb-468f-47f5-8631-e87af6245ac9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-rl772" [fd0587cb-468f-47f5-8631-e87af6245ac9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004134558s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-6rdm9" [3eccba6e-43c5-49ed-8c06-c5dcfc82c626] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004686424s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-897217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-897217 "pgrep -a kubelet"
I1229 08:11:45.309375   13486 config.go:182] Loaded profile config "calico-897217": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

TestNetworkPlugins/group/calico/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-897217 replace --force -f testdata/netcat-deployment.yaml
I1229 08:11:45.599559   13486 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-g65vc" [f178f8c7-7ab4-4302-ba62-1a5ff14f3e44] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-g65vc" [f178f8c7-7ab4-4302-ba62-1a5ff14f3e44] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004729311s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.31s)

TestNetworkPlugins/group/flannel/Start (62.73s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m2.734182448s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.73s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-897217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/false/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-897217 "pgrep -a kubelet"
I1229 08:12:11.017988   13486 config.go:182] Loaded profile config "false-897217": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.18s)

TestNetworkPlugins/group/false/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-897217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-lqcsh" [601b0be6-ffb0-42a8-86b3-468727acec2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-lqcsh" [601b0be6-ffb0-42a8-86b3-468727acec2e] Running
E1229 08:12:23.246693   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/no-preload-433585/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.005702427s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.28s)

TestNetworkPlugins/group/bridge/Start (89.5s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m29.501263799s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.50s)

TestNetworkPlugins/group/false/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-897217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/Start (85.39s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-897217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m25.394453678s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (85.39s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-897217 "pgrep -a kubelet"
I1229 08:12:39.362151   13486 config.go:182] Loaded profile config "enable-default-cni-897217": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-897217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-ldkhx" [152a7a5d-6a90-4f5f-9d28-09754de67c5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-ldkhx" [152a7a5d-6a90-4f5f-9d28-09754de67c5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.009539746s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-897217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-t8jqg" [255701e3-7216-43c9-8732-c21c6f31e31a] Running
E1229 08:13:03.945604   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/old-k8s-version-736414/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004787059s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-897217 "pgrep -a kubelet"
I1229 08:13:04.921742   13486 config.go:182] Loaded profile config "flannel-897217": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/flannel/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-897217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-5d2ww" [b56b5d4c-7ad0-4c97-95d2-f0ca61a02837] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-5d2ww" [b56b5d4c-7ad0-4c97-95d2-f0ca61a02837] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005753016s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.33s)

TestPreload/PreloadSrc/gcs (3.62s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-053592 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2 
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-053592 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2 : (3.486459248s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-053592" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-053592
--- PASS: TestPreload/PreloadSrc/gcs (3.62s)

                                                
                                    
TestPreload/PreloadSrc/github (17.1s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-713055 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=kvm2 
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-713055 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=kvm2 : (16.950699319s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-713055" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-713055
--- PASS: TestPreload/PreloadSrc/github (17.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-897217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.27s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-351924 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2 
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-351924" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-351924
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.27s)
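
The three PreloadSrc subtests above differ only in the --preload-source value (gcs, github, or the already-cached tarball). A hedged sketch of the same download-only run from the job's working directory, using a throwaway profile name (dl-check is a placeholder):

    # fetch only the preloaded images/binaries tarball, without booting a VM
    out/minikube-linux-amd64 start -p dl-check --download-only \
      --kubernetes-version v1.34.0-rc.2 --preload-source=github \
      --alsologtostderr --v=1 --driver=kvm2
    # clean up the throwaway profile afterwards, as helpers_test.go does
    out/minikube-linux-amd64 delete -p dl-check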

                                                
                                    
TestISOImage/PersistentMounts//data (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
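
Every PersistentMounts subtest above issues the same df probe over SSH, once per directory, to confirm the path sits on the persistent ext4 data volume rather than tmpfs. A manual spot check, assuming the guest-908034 profile from this run is still up:

    # a non-empty line means the path is mounted from the ext4 persistent disk
    out/minikube-linux-amd64 -p guest-908034 ssh "df -t ext4 /data | grep /data"
    out/minikube-linux-amd64 -p guest-908034 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"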

                                                
                                    
TestISOImage/VersionJSON (0.17s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   kicbase_version: v0.0.48-1766884053-22351
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: f5189b2bdbb6990e595e25e06a017f8901d29fa8
iso_test.go:118:   iso_version: v1.37.0-1766979747-22353
--- PASS: TestISOImage/VersionJSON (0.17s)
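
The version fields above are read from /version.json baked into the ISO. They can be dumped back out (and optionally pretty-printed on the host, assuming python3 is installed there):

    # show the ISO's version manifest; the json.tool step is optional host-side formatting
    out/minikube-linux-amd64 -p guest-908034 ssh "cat /version.json" | python3 -m json.tool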

                                                
                                    
TestISOImage/eBPFSupport (0.16s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-908034 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.16s)
E1229 08:13:39.684572   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/default-k8s-diff-port-219443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:13:43.090902   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/addons-909246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
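
The eBPF check above only asserts that the guest kernel exposes BTF type information, which most modern eBPF tooling needs. The same probe can be run by hand against any profile (guest-908034 is this run's):

    # BTF-enabled kernels expose /sys/kernel/btf/vmlinux
    out/minikube-linux-amd64 -p guest-908034 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"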

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-897217 "pgrep -a kubelet"
I1229 08:13:43.925583   13486 config.go:182] Loaded profile config "bridge-897217": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-897217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-j2wvc" [bb8bab06-e3c4-4e62-859d-9d69e9219464] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1229 08:13:45.167507   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/no-preload-433585/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-j2wvc" [bb8bab06-e3c4-4e62-859d-9d69e9219464] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004307601s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-897217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-897217 "pgrep -a kubelet"
I1229 08:14:04.010345   13486 config.go:182] Loaded profile config "kubenet-897217": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-897217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-ckjrp" [8d53ec82-164c-4b94-9606-03b5d9cd62b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1229 08:14:08.200831   13486 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9552/.minikube/profiles/functional-695625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-ckjrp" [8d53ec82-164c-4b94-9606-03b5d9cd62b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.0052113s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-897217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)
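
The DNS, Localhost, and HairPin subtests for flannel, bridge, and kubenet above all drive the same netcat deployment. They can be replayed against any of those contexts; kubenet-897217 (from this run) is shown as an example:

    # cluster DNS resolution from inside the netcat pod
    kubectl --context kubenet-897217 exec deployment/netcat -- nslookup kubernetes.default
    # loopback connectivity inside the pod
    kubectl --context kubenet-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod reaching itself via its own service name
    kubectl --context kubenet-897217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"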

                                                
                                    

Test skip (34/370)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.35.0/cached-images 0
15 TestDownloadOnly/v1.35.0/binaries 0
16 TestDownloadOnly/v1.35.0/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/PodmanEnv 0
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
187 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
215 TestKicCustomNetwork 0
216 TestKicExistingNetwork 0
217 TestKicCustomSubnet 0
218 TestKicStaticIP 0
250 TestChangeNoneUser 0
253 TestScheduledStopWindows 0
257 TestInsufficientStorage 0
261 TestMissingContainerUpgrade 0
276 TestStartStop/group/disable-driver-mounts 0.19
280 TestNetworkPlugins/group/cilium 4.14
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
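
All eight TunnelCmd skips above share one cause: the tunnel helper cannot run route without a password prompt on this host. If these tests need to run locally, pre-authenticating sudo before starting the tunnel is usually sufficient; a sketch under that assumption (the profile name is a placeholder to substitute):

    # cache sudo credentials so the tunnel can add routes non-interactively
    sudo -v
    PROFILE=functional-000000   # placeholder; use the actual functional test profile name
    out/minikube-linux-amd64 tunnel -p "$PROFILE"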

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-061054" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-061054
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-897217 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-897217" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-897217

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-897217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897217"

                                                
                                                
----------------------- debugLogs end: cilium-897217 [took: 3.962642633s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-897217" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-897217
--- SKIP: TestNetworkPlugins/group/cilium (4.14s)
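
The cilium group is skipped before any cluster is created, so every debugLogs probe above fails against the nonexistent cilium-897217 profile and context; that noise is expected rather than a failure of the run. To confirm which profiles actually existed at collection time, the report's own suggestion applies:

    # list all minikube profiles known under this job's MINIKUBE_HOME
    out/minikube-linux-amd64 profile list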

                                                
                                    